[gentoo-user] emerge firefox-52.4.0 compile failure

2017-10-07 Thread Grant Edwards
When I did my usual update today firefox 52.4.0 failed to build.
There are thousands of compiler warnings in the build log, but the
only thing I can find that looks like an error is this:

/usr/bin/x86_64-pc-linux-gnu-g++ [...] 
/var/tmp/portage/www-client/firefox-52.4.0/work/firefox-52.4.0esr/ff/gfx/thebes/Unified_cpp_gfx_thebes0.cpp
[...]
In file included from 
/var/tmp/portage/www-client/firefox-52.4.0/work/firefox-52.4.0esr/ff/gfx/thebes/Unified_cpp_gfx_thebes0.cpp:65:0:
/var/tmp/portage/www-client/firefox-52.4.0/work/firefox-52.4.0esr/gfx/thebes/gfxFont.cpp:2625:29:
 error: 'mozilla::gfx::ShapedTextFlags' has not been declared
/var/tmp/portage/www-client/firefox-52.4.0/work/firefox-52.4.0esr/gfx/thebes/gfxFont.cpp:2626:24:
 error: 'RoundingFlags' has not been declared
/var/tmp/portage/www-client/firefox-52.4.0/work/firefox-52.4.0esr/gfx/thebes/gfxFont.cpp:2618:1:
 error: template-id 'GetShapedWord<>' for 'gfxShapedWord* 
gfxFont::GetShapedWord(gfxFont::DrawTarget*, const uint8_t*, uint32_t, 
uint32_t, gfxFont::Script, bool, int32_t, int, int, gfxTextPerfMetrics*)' does 
not match any template declaration
[...]
make[4]: *** 
[/var/tmp/portage/www-client/firefox-52.4.0/work/firefox-52.4.0esr/config/rules.mk:951:
 Unified_cpp_gfx_thebes0.o] Error 1
make[4]: *** Waiting for unfinished jobs

Google provides zero hits for any of those three errors.

Does this look familiar to anybody?

--
Grant


Re: [gentoo-user] Re: eth over usb

2017-10-07 Thread mad.scientist.at.large
I'm blessed with city-operated fiber-to-the-premises gigabit ethernet; when
they do an install they check the link with a laptop and a gigabit USB3
adapter.  It actually got about 960 Mb/s.

--
"Informed delivery" is just an excuse for the post office to compile
databases for sale to marketing firms and those even less reputable; it is a
gross abuse of the postal system's special access to our lives.


6. Oct 2017 01:54 by p...@xvalheru.org:


> On 2017-10-06 00:44, Neil Bothwick wrote:
>> On Thu, 5 Oct 2017 15:15:11 -0700, Ian Zimmerman wrote:
>>
>>> > I'm installing gentoo on new laptop which doesn't have eth slot. I
>>> > have i-tec usb-eth adapter which works fine (tested on linux live
>>> > distribution).
>>>
>>> Can you get 100Mbit/s with it?
>>>
>>> The laptop I use also has no ethernet.  I bought a USB dongle for that
>>> but it turns out it can only do the original 10Mbit/s, half-duplex; much
>>> slower than wifi and even somewhat slower than the WAN connection here.
>>
>> I have a gigabit USB 3.0 to ethernet adaptor here. It's branded Digitus
>> which probably means nothing, lsusb shows
>>
>> ID 0b95:1790 ASIX Electronics Corp. AX88179 Gigabit Ethernet
>
> The one I have is also USB3 and looks fast, but I haven't tested it yet - 
> right now I'm trying to set up gentoo :-)
>
> Pat
>

Re: [gentoo-user] sys-apps/texinfo-6.5: Aborted /usr/bin/perl /usr/bin/perl ../tp/texi2any

2017-10-07 Thread Kent Fredric
On Fri, 6 Oct 2017 03:43:50 -0400
Andrey Moshbear  wrote:

> Hi;
> 
> texi2any fails with SIGABRT when compiling texinfo:
> 
> emerge -1v =sys-apps/texinfo-6.5: http://dpaste.com/0XJMVRV
> emerge --info: http://dpaste.com/1DDRESJ
> 
> What's the failure cause and appropriate solution or workaround?
> 
> -- AV
> 

This looks somewhat like https://bugs.gentoo.org/622576,
but it might not be the same issue.

Either way, all this scares me: 

/bin/sh: line 15:  1456 Aborted /usr/bin/perl ../tp/texi2any -I 
. -o info-stnd.info `test -f 'info-stnd.texi' || echo './'`info-stnd.texi
*** Error in `/usr/bin/perl': free(): invalid pointer: 0x01e92cc8 ***
======= Backtrace: =========
/lib64/libc.so.6(+0x76bbb)[0x7fc819991bbb]
/lib64/libc.so.6(+0x7e385)[0x7fc81385]
/lib64/libc.so.6(+0x7ed6e)[0x7fc81d6e]
../tp/Texinfo/MiscXS/.libs/MiscXS.so(xs_abort_empty_line+0x2f3)[0x7fc8177dee7c]
../tp/Texinfo/MiscXS/.libs/MiscXS.so(xs_merge_text+0x8c1)[0x7fc8177e103f]
../tp/Texinfo/MiscXS/.libs/MiscXS.so(+0x2980)[0x7fc8177dd980]
/usr/lib64/libperl.so.5.24(Perl_pp_entersub+0x2ad)[0x7fc819e0a6cd]

And it probably warrants a bug report against texinfo.
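For anyone hitting the same crash: the texinfo manual documents a
TEXINFO_XS environment variable that makes texi2any skip its XS modules
(the ones implicated in the backtrace above) in favour of the pure-Perl
code paths. Untested here, but it may be worth trying as a workaround
while the bug gets sorted:

    # hedged workaround: force texi2any's pure-Perl code paths
    TEXINFO_XS=omit emerge -1v =sys-apps/texinfo-6.5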





Re: [gentoo-user] {OT?} which fs on 1.8TB partition

2017-10-07 Thread Neil Bothwick
On Sat, 07 Oct 2017 19:11:42 +0200, J. Roeleveld wrote:

> > > mdbox? Is this a single file per mail folder?  
> > 
> > It's multiple mails per file and multiple files per mailbox.
> >   
> > > The main reason I switched to maildir several decades ago was
> > > precisely the issues (by design) mbox has.
> > > A single corrupted email WILL kill the entire folder.  
> > 
> > https://wiki2.dovecot.org/MailboxFormat/dbox  
> 
> Interesting, but I still consider multiple emails inside a single file
> a recipe for disaster.

It increases the risk, but by how much? The wiki page doesn't give any
indication of how many mails are stored in each file. Is it 5 mails per
file or 5 files per mailbox?
 
> The following is another cause for concern:
> "This also means that you must not lose the dbox index files, they
> can't be regenerated without data loss. "

I was concerned about that but further reading indicates that these files
hold only metadata. You may lose information on which mails have been
read or flagged but the mails are still there.

It also adds complication at the MDA level, as procmail and friends can't
simply save the mail in a directory. On the other hand, Dovecot is mature
and well-used software, so these concerns have probably already been
addressed somewhere.
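For reference, pointing Dovecot at mdbox is a one-line change in the
configuration (the path here is only an example; see the wiki page linked
above for the details):

    mail_location = mdbox:~/mdbox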


-- 
Neil Bothwick

Blessed be the pessimist for he hath made backups.




Re: [gentoo-user] {OT?} which fs on 1.8TB partition

2017-10-07 Thread Rich Freeman
On Sat, Oct 7, 2017 at 12:12 PM, Neil Bothwick  wrote:
> On Sat, 7 Oct 2017 07:06:24 -0400, Rich Freeman wrote:
>
>> btrfs isn't horrible, but it basically hasn't been optimized at all.
>> The developers are mainly focused on getting it to not destroy your
>> data, with mixed success.  An obvious example of this is that if you
>> read a file from a pair of mirrors, the filesystem decides which drive
>> in the pair to use based on whether the PID doing the read is even or
>> odd.
>>
>> Fundamentally I haven't seen any arguments as to why btrfs should be
>> any worse than zfs.  It just hasn't been implemented completely.  But,
>> if you want a filesystem today and not in 10 years you need to take
>> that into account.
>
> I switched from ZFS to btrfs a few years ago when it appeared that ZFS
> wasn't really going anywhere while btrfs was under active development. It
> looks like I backed the wrong horse and should investigate switching back.
>

Well, they're both FOSS, and honestly I feel like btrfs has more
potential, but zfs is much more usable today.  Btrfs has features
which make it a lot more flexible in smaller installs (like being able
to remove disks, and treating snapshots as full citizens).  However,
zfs generally can get the job done and is far less likely to eat your
data in the process.  I was also a btrfs hold-out for a long time, and
I look forward to using it again some day, but it hasn't matured like
I originally hoped.

-- 
Rich



Re: [gentoo-user] {OT?} which fs on 1.8TB partition

2017-10-07 Thread J. Roeleveld
On Saturday, October 7, 2017 6:13:57 PM CEST Neil Bothwick wrote:
> On Sat, 07 Oct 2017 14:59:39 +0200, J. Roeleveld wrote:
> > > Although, I will also be switching to dovecot's mdbox format when I
> > > set up my next server, so the issue of lots of small files won't be
> > > nearly as big.
> > 
> > mdbox? Is this a single file per mail folder?
> 
> It's multiple mails per file and multiple files per mailbox.
> 
> > The main reason I switched to maildir several decades ago was precisely
> > the issues (by design) mbox has.
> > A single corrupted email WILL kill the entire folder.
> 
> https://wiki2.dovecot.org/MailboxFormat/dbox

Interesting, but I still consider multiple emails inside a single file a
recipe for disaster.

The following is another cause for concern:
"This also means that you must not lose the dbox index files, they can't be 
regenerated without data loss. "

--
Joost



Re: [gentoo-user] Removal of classic skype

2017-10-07 Thread Mick
On Saturday, 7 October 2017 17:32:21 BST Raymond Jennings wrote:
> Due to the removal of qt4, all of its reverse dependencies are also going
> to be removed.
> 
> This decision has already been made by the qt project and is not up for
> discussion.
> 
> Furthermore, qt4 has a large number of security bugs, it has been
> brought to my attention that it even fails to build in some cases, and
> it is no longer maintained upstream.  Because the build failures make
> qt4, and thus skype classic, impossible to install in a large number of
> cases, I'm no longer going to maintain it.
> 
> Anyone who really wants to keep classic skype should feel free to
> install the "kde sunset" overlay to recover the soon-to-be-removed qt4
> dependencies, and for the moment to make a snapshot of the ebuild before
> it is removed from the portage tree.
> 
> Also, even though they haven't *yet* followed through, Microsoft has
> already announced that the classic version of Skype will eventually be
> EOL'ed.  As of 48 hours ago, the last time I checked, you are still able
> to install it, but it is on the chopping block and will likely be
> removed from download eventually, as well as banned from Microsoft's
> login servers.  Once this happens, further usage will be impossible.
> 
> No further support can be offered on skype classic, and it is eventually
> going to be removed from the portage tree for the reasons listed above.

Thank you for letting us know.

I have already moved to net-im/skypeforlinux because cross-platform usage of 
(classic) skype started malfunctioning some months ago now.  Skypeforlinux 
works OK for me at present.

-- 
Regards,
Mick



Re: [gentoo-user] Error while starting Docker daemon

2017-10-07 Thread Mick
On Saturday, 7 October 2017 17:23:33 BST Hubert Hauser wrote:
> I am using Gentoo as the host OS for Docker containers. I have
> compiled the kernel using the instructions at
> https://wiki.gentoo.org/wiki/Docker#Kernel and installed Docker from
> the Gentoo repository.
> 
> Host system information:
> 
> pecan@tux ~ $ uname -a
> Linux tux 4.12.12-gentoo #8 SMP Sat Oct 7 13:58:47 CEST 2017 x86_64
> Intel(R) Core(TM) i5-6300HQ CPU @ 2.30GHz GenuineIntel GNU/Linux
> 
> Docker version:
> 
> pecan@tux ~ $ docker version
> Client:
>  Version:  17.03.2-ce
>  API version:  1.27
>  Go version:   go1.9.1
>  Git commit:   f5ec1e2
>  Built:Sat Oct  7 14:50:59 2017
>  OS/Arch:  linux/amd64
> Cannot connect to the Docker daemon at unix:///var/run/docker.sock.
> Is the docker daemon running?
> 
> Look at "Cannot connect to the Docker daemon at
> unix:///var/run/docker.sock. Is the docker daemon running?". The same
> message appears if I try to get system-wide Docker information:
> 
> pecan@tux ~ $ docker info
> Cannot connect to the Docker daemon at unix:///var/run/docker.sock.
> Is the docker daemon running?

It seems you have not yet started docker.


> The same error appears if I run the same command with sudo, so the
> error concerns the daemon itself. I checked whether there is a problem
> with the Docker daemon's privileges.
> 
> pecan@tux ~ $ sudo docker info
> Cannot connect to the Docker daemon at unix:///var/run/docker.sock.
> Is the docker daemon running?
> 
> The message suggests that the Docker daemon may not be running. I
> checked the daemon status to make sure:
> 
> pecan@tux ~ $ sudo service docker status
>  * status: crashed

Did you try starting it from the CLI?  Any useful messages there?


> The Docker daemon has crashed. To find the reason, I looked at the logs:
> 
> pecan@tux ~ $ cat /var/log/docker.log
> time="2017-10-07T14:52:13.178261811+02:00" level=info
> msg="libcontainerd: new containerd process, pid: 32311"
> time="2017-10-07T14:52:14.434232306+02:00" level=info msg="Graph
> migration to content-addressability took 0.00 seconds"
> time="2017-10-07T14:52:14.434413425+02:00" level=warning msg="Your
> kernel does not support cgroup blkio weight"

OK, start by checking that your kernel has all the necessary modules
compiled in, then rebuild it and reboot.


> time="2017-10-07T14:52:14.434423960+02:00" level=warning msg="Your
> kernel does not support cgroup blkio weight_device"
> time="2017-10-07T14:52:14.434759986+02:00" level=info msg="Loading
> containers: start."
> time="2017-10-07T14:52:14.437180876+02:00" level=info msg="Firewalld
> running: false"
> Error starting daemon: Error initializing network controller: list
> bridge addresses failed: no available network
> 
> At this point I do not know what I should do to get the Docker daemon
> running.
> 
> Useful information:
> 
> - I am connected to OpenVPN through UDP.
> - I have disabled iptables and ip6tables.
> - I have set 8.8.8.8 and 8.8.4.4 DNS providers.
> - I have running privoxy and tor daemons.
> - I use OpenRC init system.
> 
> Can you help me?

I don't use docker, so I don't know its operational peculiarities, but
others with more experience will hopefully chip in.  From what I see above,
you need to rebuild your kernel with the necessary modules, reboot, and
then try starting docker if it hasn't started on its own.
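One more hedged pointer: the final log line, "list bridge addresses
failed: no available network", generally means dockerd could not find or
create the docker0 bridge, which would fit the disabled-iptables setup
described above. Assuming that is the cause here, one known workaround is
to create the bridge by hand (as root) before starting the daemon; the
address below is Docker's usual default, not something taken from these
logs:

    ip link add name docker0 type bridge
    ip addr add 172.17.0.1/16 dev docker0
    ip link set docker0 up
    rc-service docker start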

HTH.
-- 
Regards,
Mick



[gentoo-user] Removal of classic skype

2017-10-07 Thread Raymond Jennings
Due to the removal of qt4, all of its reverse dependencies are also going
to be removed.

This decision has already been made by the qt project and is not up for
discussion.

Furthermore, qt4 has a large number of security bugs, it has been
brought to my attention that it even fails to build in some cases, and
it is no longer maintained upstream.  Because the build failures make
qt4, and thus skype classic, impossible to install in a large number of
cases, I'm no longer going to maintain it.

Anyone who really wants to keep classic skype should feel free to install
the "kde sunset" overlay to recover the soon-to-be-removed qt4
dependencies, and for the moment to make a snapshot of the ebuild before
it is removed from the portage tree.

Also, even though they haven't *yet* followed through, Microsoft has
already announced that the classic version of Skype will eventually be
EOL'ed.  As of 48 hours ago, the last time I checked, you are still able
to install it, but it is on the chopping block and will likely be removed
from download eventually, as well as banned from Microsoft's login
servers.  Once this happens, further usage will be impossible.

No further support can be offered on skype classic, and it is eventually
going to be removed from the portage tree for the reasons listed above.


[gentoo-user] Error while starting Docker daemon

2017-10-07 Thread Hubert Hauser
I am using Gentoo as the host OS for Docker containers. I have compiled
the kernel using the instructions at
https://wiki.gentoo.org/wiki/Docker#Kernel and installed Docker from the
Gentoo repository.

Host system information:

    pecan@tux ~ $ uname -a
    Linux tux 4.12.12-gentoo #8 SMP Sat Oct 7 13:58:47 CEST 2017 x86_64
Intel(R) Core(TM) i5-6300HQ CPU @ 2.30GHz GenuineIntel GNU/Linux

Docker version:

    pecan@tux ~ $ docker version
    Client:
 Version:  17.03.2-ce
 API version:  1.27
 Go version:   go1.9.1
 Git commit:   f5ec1e2
 Built:    Sat Oct  7 14:50:59 2017
 OS/Arch:  linux/amd64
    Cannot connect to the Docker daemon at unix:///var/run/docker.sock.
Is the docker daemon running?

Look at "Cannot connect to the Docker daemon at
unix:///var/run/docker.sock. Is the docker daemon running?". The same
message appears if I try to get system-wide Docker information:

    pecan@tux ~ $ docker info
    Cannot connect to the Docker daemon at unix:///var/run/docker.sock.
Is the docker daemon running?

The same error appears if I run the same command with sudo, so the error
concerns the daemon itself. I checked whether there is a problem with the
Docker daemon's privileges.

    pecan@tux ~ $ sudo docker info
    Cannot connect to the Docker daemon at unix:///var/run/docker.sock.
Is the docker daemon running?

The message suggests that the Docker daemon may not be running. I checked
the daemon status to make sure:

    pecan@tux ~ $ sudo service docker status
 * status: crashed

The Docker daemon has crashed. To find the reason, I looked at the logs:

    pecan@tux ~ $ cat /var/log/docker.log
    time="2017-10-07T14:52:13.178261811+02:00" level=info
msg="libcontainerd: new containerd process, pid: 32311"
    time="2017-10-07T14:52:14.434232306+02:00" level=info msg="Graph
migration to content-addressability took 0.00 seconds"
    time="2017-10-07T14:52:14.434413425+02:00" level=warning msg="Your
kernel does not support cgroup blkio weight"
    time="2017-10-07T14:52:14.434423960+02:00" level=warning msg="Your
kernel does not support cgroup blkio weight_device"
    time="2017-10-07T14:52:14.434759986+02:00" level=info msg="Loading
containers: start."
    time="2017-10-07T14:52:14.437180876+02:00" level=info msg="Firewalld
running: false"
    Error starting daemon: Error initializing network controller: list
bridge addresses failed: no available network

At this point I do not know what I should do to get the Docker daemon
running.

Useful information:

- I am connected to OpenVPN through UDP.
- I have disabled iptables and ip6tables.
- I have set 8.8.8.8 and 8.8.4.4 DNS providers.
- I have running privoxy and tor daemons.
- I use OpenRC init system.

Can you help me?


Re: [gentoo-user] {OT?} which fs on 1.8TB partition

2017-10-07 Thread Neil Bothwick
On Sat, 07 Oct 2017 14:59:39 +0200, J. Roeleveld wrote:

> > Although, I will also be switching to dovecot's mdbox format when I
> > set up my next server, so the issue of lots of small files won't be
> > nearly as big.  
> 
> mdbox? Is this a single file per mail folder?

It's multiple mails per file and multiple files per mailbox.

> The main reason I switched to maildir several decades ago was precisely
> the issues (by design) mbox has.
> A single corrupted email WILL kill the entire folder.

https://wiki2.dovecot.org/MailboxFormat/dbox


-- 
Neil Bothwick

For every action, there is an equal and opposite malfunction.




Re: [gentoo-user] {OT?} which fs on 1.8TB partition

2017-10-07 Thread Neil Bothwick
On Sat, 7 Oct 2017 07:06:24 -0400, Rich Freeman wrote:

> btrfs isn't horrible, but it basically hasn't been optimized at all.
> The developers are mainly focused on getting it to not destroy your
> data, with mixed success.  An obvious example of this is that if you
> read a file from a pair of mirrors, the filesystem decides which drive
> in the pair to use based on whether the PID doing the read is even or
> odd.
> 
> Fundamentally I haven't seen any arguments as to why btrfs should be
> any worse than zfs.  It just hasn't been implemented completely.  But,
> if you want a filesystem today and not in 10 years you need to take
> that into account.

I switched from ZFS to btrfs a few years ago when it appeared that ZFS
wasn't really going anywhere while btrfs was under active development. It
looks like I backed the wrong horse and should investigate switching back.


-- 
Neil Bothwick

C music backward: get yer dog, wife, job, truck, kids, and sobriety
back.




Re: [gentoo-user] {OT?} which fs on 1.8TB partition

2017-10-07 Thread J. Roeleveld
On Saturday, October 7, 2017 11:28:08 AM CEST Tanstaafl wrote:
> On 10/6/2017, 2:12:00 PM, J. Roeleveld  wrote:
> > I had a large partition with reiserfs.
> > Running fsck always failed due to running out of memory.
> > 
> > Partition was quite a bit larger than 2TB (around 6TB) and contained
> > a huge number (millions) of files, but having an fsck become
> > impossible with 16GB of memory available was rather annoying.
> 
> Ah, yes, I had a similar problem occasionally when a user would decide
> to delete (or move to a different folder) a bunch (as many as tens of
> thousands) of messages at once... Thunderbird would go non-responsive,
> and the server was brought to its knees. I'd have to kill their server
> processes, and then the user would end up with a bunch of duplicate
> messages in their maildirs.
> 
> Very annoying.

Actually, I used to do this a lot using a webmail client (when I was still 
able to run squirrelmail without having to change back to an old PHP 
version) and never had any issues with it.
Neither with reiserfs nor with ext4.

I would put that down to either hardware or issues with the chosen IMAP
server. For reference: I have been using Cyrus for a very long time.

--
Joost



Re: [gentoo-user] {OT?} which fs on 1.8TB partition

2017-10-07 Thread J. Roeleveld
On Saturday, October 7, 2017 11:18:33 AM CEST Tanstaafl wrote:
> On 10/6/2017, 8:53:27 AM, Philip Webb  wrote:
> > 171005 christos kotsis wrote:
> >> I just noticed that ReiserFS has significant performance
> >> over ext3, 4 when dealing with small files.
> > 
> > I've long relied on ReiserFS for everything except  /boot
> > & have never had any problems with my files or drives.
> > I have many small files + a few big PDFs -- perhaps  c 20 MB ea  --
> > & the big ones simply stay where I put them, so no changes to handle.
> 
> I used ReiserFS for many - 8+ - years on our old mail server, selected
> for its performance with large numbers of small (maildir) files, and
> never had a problem.

Same here, apart from that one partition where the fsck never worked.

> But during the last rebuild when virtualizing everything, sometime
> around 2012, I switched to XFS, and believe I saw a performance gain,
> and no more long fsck's during the rare reboots... and again, no problems.

My last rebuild was earlier this year; my mail had already been migrated to 
ext4 without issues. (I did not notice any performance issues.)

> Personally, I can't wait until btrfs is fully ready/stable, and have
> been considering FreeBSD (or FreeNAS) just for ZFS, for the reliability
> factor, but have wondered about performance for mail servers.
> 
> Anyone have any experience with comparing performance with either btrfs
> or ZFS against either ReiserFS or XFS for a maildir based mail server?

My mailserver (Cyrus) uses ext4 for the mailboxes.
This is on a partition which is accessed via iSCSI.
Which is a zvol on a ZFS pool.

E.g.: disks <-> ZFS <-> zvol <-> iSCSI <-> ext4

I am not noticing any significant performance issues; the ones I do see
could be resolved by adding a dedicated SLOG and L2ARC, but that would only
help the systems in the rack, as those are connected with a 20GbE link. The
rest of the systems won't get more than 1GbE.

I have several large mailboxes:
- postgresql-hackers = 195,000 items
- gentoo-user = 240,000 items
- Xen-devel = 366,000 items

The others are below 100,000.
I use these as archives and regularly search through them before resorting
to Google or asking on the relevant mailing lists.

> Although, I will also be switching to dovecot's mdbox format when I set
> up my next server, so the issue of lots of small files won't be nearly
> as big.

mdbox? Is this a single file per mail folder?
The main reason I switched to maildir several decades ago was precisely the 
issues (by design) mbox has.
A single corrupted email WILL kill the entire folder.

--
Joost



Re: [gentoo-user] {OT?} which fs on 1.8TB partition

2017-10-07 Thread Rich Freeman
On Sat, Oct 7, 2017 at 6:28 AM, Neil Bothwick  wrote:
> On Sat, 7 Oct 2017 05:18:33 -0400, Tanstaafl wrote:
>
>> Anyone have any experience with comparing performance with either btrfs
>> or ZFS against either ReiserFS or XFS for a maildir based mail server?
>
> I tried btrfs on a mail server and it was unbearably slow. Disabling
> copy-on-write made a big difference, but it still went a lot faster when
> I switched to ext4.
>
> I haven't used XFS in years, maybe it's time to revisit it.
>

I haven't used xfs in a while, but here is my sense of things, for a
basic configuration (filesystem running on one drive or a mirrored
pair):

xfs > ext4 > zfs >>> btrfs

At least, that is in terms of most conventional measures of
performance (reading and writing files on a typical filesystem).  If
you want to measure performance in terms of how long your system is
down after a controller error then both zfs and btrfs will have an
advantage.  I mention it because I think that integrity shouldn't take
a back seat to performance 99% of the time.  It has performance
benefits of its own, but you only see them every couple of years when
something fails.

btrfs isn't horrible, but it basically hasn't been optimized at all.
The developers are mainly focused on getting it to not destroy your
data, with mixed success.  An obvious example of this is that if you
read a file from a pair of mirrors, the filesystem decides which drive
in the pair to use based on whether the PID doing the read is even or
odd.
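As a toy sketch (my illustration, not the actual kernel code) of how naive
the policy described above is:

```python
def pick_mirror(pid: int, num_copies: int = 2) -> int:
    """Pick which mirror to read from, as described above: PID parity only.

    Toy model of btrfs's RAID1 read balancing; the real logic lives in
    the kernel's btrfs read path.
    """
    return pid % num_copies

# Every even-PID process always reads from mirror 0 and every odd-PID
# process from mirror 1, regardless of how busy either drive is.
print(pick_mirror(1000))  # -> 0
print(pick_mirror(1001))  # -> 1
```

A single busy reader therefore hammers one drive while the other sits
idle, which is part of why the performance complaints above ring true.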

Fundamentally I haven't seen any arguments as to why btrfs should be
any worse than zfs.  It just hasn't been implemented completely.  But,
if you want a filesystem today and not in 10 years you need to take
that into account.

Now, ZFS has a bunch of tricks available to improve things like SSD
read caches and write logs.  But, you could argue that other
filesystems support separate journal devices and there is bcache so I
think if you want to look at those features you need to compare apples
to apples.  ZFS certainly integrates it all nicely, but then it has
other "features" like not being able to remove a drive from a storage
pool, or revert to a snapshot without deleting all the subsequent
snapshots.

In general though I think zfs will always suffer a bit in performance
because it is copy-on-write.  If you want to change 1 block in the
middle of a file, ext4 and xfs can just write over that 1 block, while
zfs and btrfs are going to write that block someplace else and do a
metadata dance to map it over the original block.  I just don't see
how that will ever be faster.  Of course, if you have a hardware
failure in the middle of an operation zfs and btrfs basically
guarantee that the writes behave as if they were atomic, while you
only get that benefit with ext4/xfs if you do full journaling with a
significant performance hit, and if you're using mdadm underneath then
you lose that guarantee.  Both zfs and btrfs avoid the raid write hole
(though to be fair you don't want to go anywhere near parity raid on
btrfs anytime soon).
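To make the copy-on-write point above concrete, here is a toy model (my
sketch, not real filesystem code) of changing one block in the middle of
a file: overwrite-in-place touches one block, while CoW writes the data
elsewhere and then updates the mapping metadata.

```python
def overwrite_in_place(disk: list, idx: int, data: str) -> int:
    """ext4/xfs-style: write the changed block where it already lives."""
    disk[idx] = data
    return 1  # one block write

def copy_on_write(disk: list, block_map: dict, logical: int,
                  data: str, free: list) -> int:
    """zfs/btrfs-style: write elsewhere, then remap the logical block."""
    new_loc = free.pop()          # allocate a fresh physical block
    disk[new_loc] = data          # first write: the data, someplace else
    block_map[logical] = new_loc  # second write: metadata remapping
    return 2  # at least two writes; the old block becomes garbage

disk = ["A", "B", "C", None]
print(overwrite_in_place(disk, 1, "B2"))  # -> 1

disk = ["A", "B", "C", None]
block_map = {0: 0, 1: 1, 2: 2}
print(copy_on_write(disk, block_map, 1, "B2", free=[3]))  # -> 2
```

The upside, as noted above, is that the original block is untouched until
the remap commits, which is what makes the update behave atomically after
a crash.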

I'm not saying that there isn't a place for performance-above-all.
For an ephemeral worker node you already have 47 backups running and
if the node fails you restart it, so if it needs to write some data to
disk performance is probably the only concern.  Ditto for any data
that has no long-term value/etc.  However, for most general-purpose
filesystems I think integrity should be the #1 concern, because you
won't notice that 20us access-time difference, but you probably will
notice the hours spent restoring from backups, assuming you even have
backups.

-- 
Rich



Re: [gentoo-user] {OT?} which fs on 1.8TB partition

2017-10-07 Thread Neil Bothwick
On Sat, 7 Oct 2017 05:18:33 -0400, Tanstaafl wrote:

> Anyone have any experience with comparing performance with either btrfs
> or ZFS against either ReiserFS or XFS for a maildir based mail server?

I tried btrfs on a mail server and it was unbearably slow. Disabling
copy-on-write made a big difference, but it still went a lot faster when
I switched to ext4.

I haven't used XFS in years, maybe it's time to revisit it.


-- 
Neil Bothwick

Walk softly and carry a fully charged phazer.




[gentoo-user] Why I'm unable to run Vagrant as non-root user?

2017-10-07 Thread Hubert Hauser
I've installed Vagrant on Gentoo from the repository. I'm using Ruby 2.2.8.
I got the following error when I tried to run Vagrant as a non-root user:

    pecan@tux ~ $ vagrant
   
/usr/lib64/ruby/site_ruby/2.2.0/rubygems/core_ext/kernel_require.rb:55:in
`require': cannot load such file -- checkpoint (LoadError)
    from
/usr/lib64/ruby/site_ruby/2.2.0/rubygems/core_ext/kernel_require.rb:55:in
`require'
    from
/usr/lib64/ruby/gems/2.2.0/gems/vagrant-1.9.8/lib/vagrant/environment.rb:7:in
`'
    from
/usr/lib64/ruby/gems/2.2.0/gems/vagrant-1.9.8/bin/vagrant:118:in `'

The result of `ruby
/usr/lib64/ruby/gems/2.2.0/gems/vagrant-2.0.0/bin/vagrant`:

    pecan@tux ~ $ ruby
/usr/lib64/ruby/gems/2.2.0/gems/vagrant-2.0.0/bin/vagrant
   
/usr/lib64/ruby/site_ruby/2.2.0/rubygems/core_ext/kernel_require.rb:55:in
`require': cannot load such file -- log4r (LoadError)
    from
/usr/lib64/ruby/site_ruby/2.2.0/rubygems/core_ext/kernel_require.rb:55:in
`require'
    from
/usr/lib64/ruby/gems/2.2.0/gems/vagrant-2.0.0/bin/vagrant:61:in `'

The result of `sudo vagrant`:

    pecan@tux ~ $ sudo vagrant
    Usage: vagrant [options]  []
   
    -v, --version    Print the version and exit.
    -h, --help   Print this help.
   
    Common commands:
 box manages boxes: installation, removal, etc.
 destroy stops and deletes all traces of the vagrant machine
 global-status   outputs status Vagrant environments for this user
 halt    stops the vagrant machine
 help    shows the help for a subcommand
 init    initializes a new Vagrant environment by
creating a Vagrantfile
 login   log in to HashiCorp's Vagrant Cloud
 package packages a running vagrant environment into a box
 plugin  manages plugins: install, uninstall, update, etc.
 port    displays information about guest port mappings
 powershell  connects to machine via powershell remoting
 provision   provisions the vagrant machine
 push    deploys code in this environment to a
configured destination
 rdp connects to machine via RDP
 reload  restarts vagrant machine, loads new Vagrantfile
configuration
 resume  resume a suspended vagrant machine
 snapshot    manages snapshots: saving, restoring, etc.
 ssh connects to machine via SSH
 ssh-config  outputs OpenSSH valid configuration to connect
to the machine
 status  outputs status of the vagrant machine
 suspend suspends the machine
 up  starts and provisions the vagrant environment
 validate    validates the Vagrantfile
 version prints current and latest Vagrant version
   
    For help on any individual command run `vagrant COMMAND -h`
   
    Additional subcommands are available, but are either more advanced
    or not commonly used. To see all subcommands, run the command
    `vagrant list-commands`.

I'm using the system Ruby. As you can see, Vagrant works with sudo, but
why am I unable to run it as a non-root user? What should I do to be able
to run Vagrant as a non-root user?

I'm counting on your help.

Re: [gentoo-user] Wiki-viewer anyone?

2017-10-07 Thread tuxic
On 10/07 08:19, Neil Bothwick wrote:
> On Sat, 7 Oct 2017 00:41:26 +0200, tu...@posteo.de wrote:
> 
> > I don't want to convert the md-files to HTML, since I want to update
> > the repo later (see above).
> > The problem is files referencing other files. Reading the md-files
> > via vim (for example) would mean chasing all the references by hand.
> > Furthermore, the docs are filled with graphics (for example, images
> > of the fonts which can be used), which cannot be displayed in an
> > ASCII editor.
> > Formatting is necessary with these docs...
> 
> I see nothing wrong with generating HTML versions, as long as you don't
> mind doing so each time you update (you could use a script to do both
> for you). But if you really want to work directly with the md files,
> this may work for you: https://github.com/joeyespo/grip
> 
> 
> -- 
> Neil Bothwick
> 
> No trees were harmed in the sending of this message. However, a large
> number of electrons were terribly inconvenienced.


Hi Neil,

thanks for the info and the link.

I tried it. Unfortunately, grip failed with an error,
and I don't have the time to dig into it right now...

Cheers
Meino






Re: [gentoo-user] {OT?} which fs on 1.8TB partition

2017-10-07 Thread Tanstaafl
On 10/6/2017, 2:12:00 PM, J. Roeleveld  wrote:
> I had a large partition with reiserfs.
> Running fsck always failed due to running out of memory.
> 
> Partition was quite a bit larger than 2TB (around 6TB) and contained
> a huge (millions) number of files, but having an fsck become
> impossible with 16GB of memory available was rather annoying.

Ah, yes, I had a similar problem occasionally when a user would decide
to delete (or move to a different folder) a bunch (as many as tens of
thousands) of messages at once... Thunderbird would go non responsive,
and the server was brought to its knees. I'd have to kill their server
processes, and then the user would end up with a bunch of duplicate
messages in their maildirs.

Very annoying.



Re: [gentoo-user] {OT?} which fs on 1.8TB partition

2017-10-07 Thread Tanstaafl
On 10/6/2017, 8:53:27 AM, Philip Webb  wrote:
> 171005 christos kotsis wrote:
>> I just noticed that ReiserFS has a significant performance advantage
>> over ext3/4 when dealing with small files.

> I've long relied on ReiserFS for everything except  /boot
> & have never had any problems with my files or drives.
> I have many small files + a few big PDFs -- perhaps  c 20 MB ea  --
> & the big ones simply stay where I put them, so no changes to handle.

I used ReiserFS for many (8+) years on our old mail server, selected
for its performance with large numbers of small (maildir) files, and
never had a problem.

But during the last rebuild when virtualizing everything, sometime
around 2012, I switched to XFS, and believe I saw a performance gain,
and no more long fsck's during the rare reboots... and again, no problems.

Personally, I can't wait until btrfs is fully ready/stable, and have
been considering FreeBSD (or FreeNAS) just for ZFS, for the reliability
factor, but have wondered about performance for mail servers.

Anyone have any experience with comparing performance with either btrfs
or ZFS against either ReiserFS or XFS for a maildir based mail server?

Although, I will also be switching to dovecot's mdbox format when I set
up my next server, so the issue of lots of small files won't be nearly
as big.
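For reference, the mdbox switch mentioned above is a one-line change in dovecot (the file path below is the conventional location and an assumption on my part; migrating existing maildirs additionally needs dsync):

```
# /etc/dovecot/conf.d/10-mail.conf (sketch)
# mdbox packs many messages per file, avoiding the one-file-per-message
# pattern that makes maildir hard on the filesystem
mail_location = mdbox:~/mdbox
```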



Re: [gentoo-user] Linode discontinuing Xen, migrating to KVM

2017-10-07 Thread Tanstaafl
On 10/7/2017, 12:09:07 AM, Stroller  wrote:
> 
>> On 6 Oct 2017, at 15:31, Tanstaafl  wrote:
>>
 Second, do you have rc_sys defined, or are you using auto-detect (is it
 just commented out)?
>>>
>>> Just commented out.
>>
>> This is the one I'm worried about - how to change it back if it totally
>> breaks the ability to even boot.
> 
> Detach the drive from the VM, and attach it as /dev/sd[cdefgh] on another VM?
> 
> See: Linodes »  » Edit Configuration Profile » Block 
> Device Assignment
> 
> Also in the dropdowns there is an option for "Recovery -Finnix (iso)".

Thanks, yes, I found that in the docs when reading, but was wondering if
there was some kind of grub command-line boot option I could pass (would
have been much easier)...

Anyway, wasn't necessary, the migration went perfectly...

1. Change to the 64 bit kernel, reboot

2. Enter migration queue

3. Wait.. about 5 minutes

4. Done.

:)

Thanks to everyone who responded!



Re: [gentoo-user] Wiki-viewer anyone?

2017-10-07 Thread Neil Bothwick
On Sat, 7 Oct 2017 00:41:26 +0200, tu...@posteo.de wrote:

> I don't want to convert the md-files to HTML, since I want to update
> the repo later (see above).
> The problem is files referencing other files. Reading the md-files
> via vim (for example) would imply grabbing all references by hand.
> Furthermore, the docs are filled with graphics (for example, images
> of the fonts which can be used), which cannot be displayed with an
> ASCII editor.
> Formatting is necessary with these docs...

I see nothing wrong with generating HTML versions, as long as you don't
mind doing so after each time you update (you could use a script to do
both for you). But if you really want to work directly with the md
files, this may work for you: https://github.com/joeyespo/grip
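The "script to do both" could look something like this (pandoc, the repo path, and the output directory are placeholders of mine, not anything from the thread; grip, linked above, skips the conversion step entirely by serving the md files directly):

```shell
#!/bin/sh
# Hedged sketch: pull the repo, then regenerate an HTML copy of every
# markdown file so cross-references and images can be followed in a
# browser. Paths are placeholders.
repo="${WIKI_REPO:-$HOME/wiki-repo}"
out="${WIKI_HTML:-$HOME/wiki-html}"
git -C "$repo" pull --ff-only 2>/dev/null || echo "note: no git repo at $repo"
mkdir -p "$out"
# -s makes pandoc emit standalone HTML documents, not fragments
find "$repo" -name '*.md' 2>/dev/null | while read -r md; do
    pandoc -s "$md" -o "$out/$(basename "${md%.md}").html"
done
```

Run after each `git pull` (or call the pull from inside it, as here) and the HTML copy stays in step with the repo.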


-- 
Neil Bothwick

No trees were harmed in the sending of this message. However, a large
number of electrons were terribly inconvenienced.




Re: [gentoo-user] gtk+ emerge fails

2017-10-07 Thread R0b0t1
On Fri, Oct 6, 2017 at 9:28 PM,   wrote:
> On Fri, Oct 06, 2017 at 05:30:06PM +, Perry S Glenn wrote:
>>
>> Hi,
>>
>> gtk+ 2 and 3 both fail to emerge
>> glib-compile-resources gtk.gresource.xml \
>> --target=gtkresources.c 
>> --sourcedir=/var/tmp/portage/x11-libs/gtk+-3.22.16/work/gtk+-3.22.16/gtk 
>> --c-name _gtk --generate-source --manual-register
>> failed to load 
>> "/var/tmp/portage/x11-libs/gtk+-3.22.16/work/gtk+-3.22.16/gtk/theme/Adwaita/assets/bullet-symbolic.symbolic.png":
>>  Couldn't recognize the image file format for file 
>> '/var/tmp/portage/x11-libs/gtk+-3.22.16/work/gtk+-3.22.16/gtk/theme/Adwaita/assets/bullet-symbolic.symbolic.png
>>
>> Anyone else seen this?
>
> Nm, shortly after I sent the first mail about this, my boxes were
> unusable... so much for updating through an unencrypted protocol.
>
> gl
>

Do you care to explain?
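For anyone hitting the same "couldn't recognize the image file format" error Perry quoted: a hedged first diagnostic (my suggestion, not from the thread) is to check whether the file on disk actually starts with the PNG signature, i.e. whether the fetched tarball unpacked cleanly:

```shell
# Every valid PNG begins with the 8-byte magic 89 50 4e 47 0d 0a 1a 0a.
# is_png reports whether a file carries that signature.
is_png() {
    [ "$(head -c 8 "$1" | od -An -tx1 | tr -d ' \n')" = "89504e470d0a1a0a" ]
}
# e.g.:
# is_png /var/tmp/portage/x11-libs/gtk+-3.22.16/work/gtk+-3.22.16/gtk/theme/Adwaita/assets/bullet-symbolic.symbolic.png \
#     && echo "looks like a PNG" || echo "corrupted or not a PNG"
```

If the signature is wrong, re-fetching the distfile (or checking the disk/filesystem, given Perry's later report of unusable boxes) would be the next step.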