Re: [gentoo-user] logs in the browser?

2015-02-24 Thread Stefan G. Weichinger
On 24.02.2015 17:01, Rich Freeman wrote:

 Seems like there should be a systemd-users mailing list, actually.
 This sort of situation is completely distro-agnostic.

Yes! And the systemd-devel ml always feels kind of like "they will laugh
at me and say ugly things"! ;-)

 You certainly could design such an application.  If you do so I'd
 consider pulling the journald logs in JSON format.  I'd also see if
 somebody actually has written a journald library/class/etc for your
 language of choice - it seems like that is the sort of thing that is
 likely to exist soon if not already.  One of the goals of journald (and
 systemd) is to provide more of an API for services so that there is
 less parsing of text files and communicating via signals/etc.  I'm
 sure with appropriate permissions a process could just obtain log
 entries via dbus, and using cursors poll for new entries (or maybe
 there is a push/stream mechanism).
 
 Really though it seems like the solution is a generic log monitor with
 rules for such things, with the monitor utilizing the JSON data from
 journald for richer metadata/efficiency/accuracy.

Well, it might be a nice challenge to do so ... sure!

The real world challenge is that I have to provide the postfix-logs for
one domain on one server (so far) for one admin to browse through
(without giving him ssh-access ... it's a noob ...)

No overkill needed here.

But maybe I'll ask on the systemd-devel ml ...

Stefan




Re: [gentoo-user] logs in the browser?

2015-02-24 Thread Rich Freeman
On Tue, Feb 24, 2015 at 3:27 PM, Stefan G. Weichinger li...@xunil.at wrote:
 On 24.02.2015 17:01, Rich Freeman wrote:

 Seems like there should be a systemd-users mailing list, actually.
 This sort of situation is completely distro-agnostic.

 Yes! And the systemd-devel ml always feels kind of like "they will laugh
 at me and say ugly things"! ;-)


Honestly, I've never received treatment like this on the systemd lists
or irc channel.  However, when I post there it tends to be about
specific bugs with details/etc.  You're not really talking about
systemd development, but rather questions around best practices for
using it, discussion, etc.  It would be like posting this thread on
gentoo-dev - it isn't the purpose of the list, but such discussion is
completely welcome here.  If we were discussing an eclass change, on
the other hand, gentoo-dev would be the right place.

You might try asking on #systemd.  That doesn't really have a defined
topic afaict, and discussion is a lot more free.  Of course, the
downside to irc is you only hear from people online at the moment.
You might ask about a user mailing list.

-- 
Rich



[gentoo-user] the new ssd, is it happy?

2015-02-24 Thread Stefan G. Weichinger

ordered myself a new and shiny ssd last week.

one thinkpad still had that 60GB OCZ Vertex3 and that was a bit tight
now and then.

So I ordered a Samsung SSD 850 EVO 500GB for my desktop and planned to
move the former 840 EVO 250GB to the thinkpad.

Done today.

Moving was rather *boring* -

partition ssd, add new partition to btrfs filesystem, remove old
partition from btrfs filesystem, wait ~15 minutes, in the meantime copy
over the UEFI ESP to the new disk, install gummiboot there ...

it booted up on the first try ... oh my, what has happened to good old gentoo?

:-P

(resized stuff, yes ... but no big blockers anywhere)

What I would like to discuss now:

# dmesg  | grep ata1
[1.930869] ata1: SATA max UDMA/133 abar m2048@0xfb205000 port
0xfb205100 irq 26
[2.235852] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[2.237378] ata1.00: supports DRM functions and may not be fully
accessible
[2.237506] ata1.00: failed to get NCQ Send/Recv Log Emask 0x1
[2.237509] ata1.00: ATA-9: Samsung SSD 850 EVO 500GB, EMT01B6Q, max
UDMA/133
[2.237521] ata1.00: 976773168 sectors, multi 1: LBA48 NCQ (depth
31/32), AA
[2.237979] ata1.00: supports DRM functions and may not be fully
accessible
[2.238071] ata1.00: failed to get NCQ Send/Recv Log Emask 0x1
[2.238166] ata1.00: configured for UDMA/133
[20207.916327] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[20207.916528] ata1.00: supports DRM functions and may not be fully
accessible
[20207.916598] ata1.00: failed to get NCQ Send/Recv Log Emask 0x1
[20207.918249] ata1.00: supports DRM functions and may not be fully
accessible
[20207.918325] ata1.00: failed to get NCQ Send/Recv Log Emask 0x1
[20207.918419] ata1.00: configured for UDMA/133


Are these failed lines ok?

The box itself is a bit older, a

Hewlett-Packard HP Elite 7300 Series MT/2AB5, BIOS 7.12

(I never found a BIOS update! btw ...)

so maybe the chipset lacks features the SSD might be able to use.

Everything works fine so far, I would just like to understand if things
are OK with this new piece of hardware.

additional:

Device Model: Samsung SSD 850 EVO 500GB
Firmware Version: EMT01B6Q

I did not find any firmware update online, do you agree?

Thanks, regards, Stefan



Re: [gentoo-user] Report: Experience with f2fs

2015-02-24 Thread Rich Freeman
On Tue, Feb 24, 2015 at 11:31 AM, Todd Goodman t...@bonedaddy.net wrote:

 But the device is still doing wear leveling and bad block
 replacement, so you're beholden to those algorithms, and what you think
 you're allocating as sequential blocks of the flash is not necessarily so.

 Of course any decent wear leveling algorithm is still going to work
 fine, but it seems to me like the wear leveling is still occurring in the
 device and the filesystem is beneficial for use on flash based devices
 for other reasons.


Sure, if the algorithm is brain-dead it can mess up the simple task
that f2fs hands it.

However, I don't think a good wear-leveling algorithm will do nearly
as good a job as something like f2fs.  Suppose you have 10% of the
disk that gets heavily overwritten, and 90% that is fairly static.
The drive will have to detect that and decide when to move some of the
90% around just to allow that area to wear evenly.  With f2fs every
block wears at the same rate by design.  Sure, the wear-levelling
algorithm can look at that block with 500 erases and look for
someplace to relocate it, but it will find that all the other blocks
on the disk have either 500 or 501 erases and hopefully the algorithm
is smart enough to do nothing in that case.

The real killer for SSDs is writes to a unit of space smaller than an
erase block, because that requires copying and erasing.  f2fs
completely eliminates that scenario, and I believe SSD-aware
filesystems like btrfs attempt to do this as well (since it doesn't
overwrite anything in-place).
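
Back of the envelope (the 512 KiB erase block and 4 KiB write below are
made-up but typical numbers, and this is the worst naive case, not what a
real FTL actually does):

# Rewriting 4 KiB inside a 512 KiB erase block in place forces a
# read-erase-rewrite of the whole block; appending it log-style does not.
ERASE_BLOCK = 512 * 1024
WRITE       = 4 * 1024
print("worst-case in-place write amplification:", ERASE_BLOCK // WRITE)  # 128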

Still, block reallocation at the device level shouldn't be a problem
as long as the device relocates entire erase blocks at a time so that
everything stays aligned, and I can't imagine why the designers of an
SSD would want to do reallocation at anything other than the erase
block level (or larger).

I'd be interested if somebody has more hands-on experience with how
these algorithms work together.  Don't get me wrong - I agree that
ultimately the device has the final say in what data gets written
where.  I just think that f2fs tends to promote data being written to
disk in a manner that makes things optimal for the firmware.  If the
firmware authors assume that everything is FAT32 and that the first
10% of the disk needs to be moved around all the time regardless of
actual write patterns, then sure your disk won't wear right and will
probably suffer performance-wise.

-- 
Rich



Re: [gentoo-user] logs in the browser?

2015-02-24 Thread Stefan G. Weichinger
On 24.02.2015 at 03:14, Canek Peláez Valdés wrote:

 Stefan, if you already have systemd (which I believe you do), why don't you
 compile in the support for microhttpd and use the journal? This is the
 exact scenario for which systemd-journal-gatewayd[1] was written.

very good ... enabled it on one of my machines, looks good.

I am unsure if it will be possible to see only postfix.service (easier)
AND only the lines relevant for the domain of the specific customer by
doing this?

thanks, Stefan




Re: [gentoo-user] syslog-ng: how to read the log files

2015-02-24 Thread Matti Nykyri
 On Feb 24, 2015, at 2:50, Peter Humphrey pe...@prh.myzen.co.uk wrote:
 
 Thank Goodness! Someone who knows enough to trim out the bits of the 
 message he's not replying to.
 
 Why do you others make me page-down eight times to find what you've 
 written in reply to the last three lines of the preceding message?

+1

-- 
-Matti



Re: [gentoo-user] logs in the browser?

2015-02-24 Thread Rich Freeman
On Tue, Feb 24, 2015 at 4:50 AM, Stefan G. Weichinger li...@xunil.at wrote:
 On 24.02.2015 at 03:14, Canek Peláez Valdés wrote:

 Stefan, if you already have systemd (which I believe you do), why don't you
 compile in the support for microhttpd and use the journal? This is the
 exact scenario for which systemd-journal-gatewayd[1] was written.

 very good ... enabled it on one of my machines, looks good.

 I am unsure if it will be possible to see only postfix.service (easier)

I suspect this is trivial - it looks like something like this would work:
http://.../entries?_SYSTEMD_UNIT=postfix.service

(note, you might need to tweak that - I haven't used the http gateway
personally and am going from the manpage)

 AND only the lines relevant for the domain of the specific customer by
 doing this?

I think you're going to be stuck here unless they come from different
machines or something like that.  Obviously you can pipe the output
through grep but journald will only pre-filter the output using
journal fields, like facility, priority, etc.  syslog only provided a
few fields for clients to specify, and this is probably because in the
end the data just got dumped to a text file so that it wasn't
searchable by field anyway.  It would be nice if they extended the
syslog protocol for systemd and made it possible for clients to
specify additional fields, but obviously the client would need to
support this (likely sending logs over dbus or such).

The http gateway seems like it is intended more as a transport
mechanism with some usability for ad-hoc human viewing.  It isn't a
full-fledged log analysis tool.  The fact that journald can output in
JSON with uuids for each entry should make it far easier to parse its
logs with an analysis tool, but I think all those vendors are playing
catch-up.  I suspect they'll support it fairly soon once they see
everybody using it.  From a machine parsing standpoint the fielded
binary format makes a lot more sense.
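
To make the "filter by unit on the gateway, grep for the domain on the
client" idea concrete, something like this is what I have in mind.  It is
untested - the host and domain are placeholders, and I'm assuming from the
manpage that Accept: application/json gets you one JSON object per line,
so you may need to tweak the parsing:

#!/usr/bin/env python3
# Sketch: pre-filter by unit via systemd-journal-gatewayd, post-filter by
# domain on the client side.  Host and domain below are placeholders.
import json
import urllib.request

GATEWAY = "http://yourhost:19531"   # wherever gatewayd is listening
DOMAIN  = "example.com"             # the customer's domain (made up)

req = urllib.request.Request(
    GATEWAY + "/entries?_SYSTEMD_UNIT=postfix.service",
    headers={"Accept": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    for line in resp:
        line = line.strip()
        if not line:
            continue
        entry = json.loads(line)
        msg = entry.get("MESSAGE", "")
        if isinstance(msg, str) and DOMAIN in msg:
            print(entry.get("__REALTIME_TIMESTAMP", "?"), msg)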

-- 
Rich



Re: [gentoo-user] Report: Experience with f2fs

2015-02-24 Thread Todd Goodman
* Rich Freeman ri...@gentoo.org [150224 07:32]:
 On Tue, Feb 24, 2015 at 6:54 AM, Bob Wya bob.mt@gmail.com wrote:
  I would always recommend a secure erase of an SSD - if you want a fresh
  start. That will mark all the NAND cells as clear of data. That will
  benefit the longevity of your device / wear levelling.
 
 Not a bad idea, though if you're trimming your filesystem (and it
 supports this), that shouldn't be necessary, and of course a log-based
 filesystem like f2fs should promote excellent wear leveling
 automatically by design.  Granted, that doesn't help you if an f2fs
 bug eats your data.

Can you explain why a log-based filesystem like f2fs would have any
impact on wear leveling?

As I understand it, wear leveling (and bad block replacement) occurs on
the SSD itself (in the Flash Translation Layer probably.)

Of course the quality of those algorithms varies with the device, and they
are pretty much a black box.

Todd



Re: [gentoo-user] Report: Experience with f2fs

2015-02-24 Thread Bob Wya
I would always recommend a secure erase of an SSD - if you want a fresh
start. That will mark all the NAND cells as clear of data. That will
benefit the longevity of your device / wear levelling.

I've been messing about with native exfat over the past few months. I found
this to be a pretty decent shared partition file system - for use with MS
Windows. The read performance will saturate a 3Gbit SATA link - but write
performance is only on the order of 100 Mbytes/second.

Personally, having been burned by btrfs, I would not try one of these
experimental file systems again... That was the same sort of pattern as
your experience. I carefully followed the Arch Wiki (large partition size,
due to COW issues, etc.) and was using it as root / on my home-brew NAS
running OpenSUSE. One day it just blew up and recovery was really hopeless
(I did manage to get the few small bits of data I needed with some
Googling) - none of the btrfs tools for this actually worked! Back to ext4
for root / - now running Arch on that box... Ironically the native ZFS port
has always been stable on that box (with a very large storage array)!

Just my $0.02!!

On 24 February 2015 at 00:46, Peter Humphrey pe...@prh.myzen.co.uk wrote:

 Some list members might be interested in how I've got on with f2fs
 (flash-friendly file system).

 According to genlop I first installed f2fs on my Atom mini-server box on
 1/11/14 (that's November, for the benefit of transpondians), but I'm
 pretty sure it must have been several months before that. I installed a
 SanDisk SDSSDP-064G-G25 in late February last year and my admittedly
 fallible memory says I changed to f2fs not many months after that, as
 soon as I discovered it.

 Until two or three weeks ago I had no problems at all. Then while doing
 a routine backup tar started complaining about files having been moved
 before it could copy them. It seems I had a copy of an /etc directory
 from somewhere (perhaps a previous installation) under /root and some
 files when listed showed question marks in all fields except their
 names. I couldn't delete them, so I re-created the root partition and
 restored from a backup.

 So far so good, but then I started getting strange errors last week. For
 instance, dovecot started throwing symbol-not-found errors. Finally,
 after remerging whatever packages failed for a few days,
 /var/log/messages suddenly appeared as a binary file again, and I'm
 pretty sure that bug's been fixed.

 Time to ditch f2fs, I thought, so I created all partitions as ext4 and
 restored the oldest backup I still had, then ran emerge -e world and
 resumed normal operations. I didn't zero out the partitions with dd;
 perhaps I should have.

 I'll watch what happens, but unless the SSD has failed after only a year
 I shouldn't have any problems.

 An interesting experience. Why should f2fs work faultlessly for several
 months, then suffer repeated failures with no clear pattern?

 --
 Rgds
 Peter.




-- 

All the best,
Robert


Re: [gentoo-user] Report: Experience with f2fs

2015-02-24 Thread Rich Freeman
On Tue, Feb 24, 2015 at 6:54 AM, Bob Wya bob.mt@gmail.com wrote:
 I would always recommend a secure erase of an SSD - if you want a fresh
 start. That will mark all the NAND cells as clear of data. That will
 benefit the longevity of your device / wear levelling.

Not a bad idea, though if you're trimming your filesystem (and it
supports this), that shouldn't be necessary, and of course a log-based
filesystem like f2fs should promote excellent wear leveling
automatically by design.  Granted, that doesn't help you if an f2fs
bug eats your data.

 Personally having been burned by btrfs I would not try one of these
 experimental file systems again...

Well, trying them is one thing, relying on them is something else.
I've had a few issues with btrfs in the last year but they've all been
of the uptime/availability nature and none has actually caused
unrecoverable data loss.  It has caused me to start moving back
towards the longterm stable branch though as the level of regressions
has been fairly high of late.

However, right now I keep everything on btrfs backed up onto ext4
using rsnapshot daily (an rsync-based tool I recommend if you're the
sort that likes rsync for backups).  So, the impact of a total
filesystem failure is limited to availability (granted, quite a bit of
it to completely restore multiple TB).  That risk is acceptable for
what I'm using it for.  Another risk would be a silent corruption that
persists longer than the number of backups I retain, but I think that
is unlikely: silent failure is one of those things btrfs is designed
to be good at detecting/preventing, and I've yet to see any reports of
this kind of failure.  If anything, that makes me think there is more
risk of a silent corruption impacting my backups (ie I'm contrasting
the risk of btrfs quietly storing the wrong content of a file vs the
risk of a hard-drive bit flip messing up data, which ext4 can't detect).

In general though there is a reason that sysadmins tend to be very
conservative with filesystems.  I doubt most even jumped onto ext4 all
that quickly, even though it was very stable from the moment it was
declared stable.  You really need to look at your use case and
understand the risks and benefits and how you plan to mitigate the
risks.  Something being experimental isn't a reason to automatically
avoid using it if it brings some significant benefit to your design,
as long as you've mitigated the risks.  And, of course, if your goal
is to better understand an experimental technology in a non-critical
setting you should probably just get your feet wet.

However, what you shouldn't do is just pick an experimental anything
as a go-to default for something you want to never have to fuss with.

-- 
Rich



Re: [gentoo-user] logs in the browser?

2015-02-24 Thread Canek Peláez Valdés
On Tue, Feb 24, 2015 at 10:33 AM, Stefan G. Weichinger li...@xunil.at
wrote:
[ ... ]
 Maybe I could set up some other web-app that (a) looks at the link
 pointing to the postfix.service-logs and (b) filters them?

(With my programmer's hat on): I think the easiest way would be to create a
little client that downloads the logs in JSON format, and then do the
filtering off-site. If I'm not mistaken, libmicrohttpd supports
Accept-Encoding: gzip, and therefore the bandwidth used should not be
much.

Also, you can get the last cursor from the journal the first time, and next
time you download logs, you start from that cursor on, so you don't
download everything again.

I don't see many advantages in doing the filtering on-site, especially
if, after a while, you are handling several servers.
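
A little sketch of what I mean (untested; the Range: entries=cursor header
is how I read the systemd-journal-gatewayd man page, and the host and file
names are placeholders, so verify against your setup):

#!/usr/bin/env python3
# Remember the last __CURSOR we saw and resume from it on the next run,
# so the whole journal is not downloaded every time.
import json
import os
import urllib.request

GATEWAY     = "http://yourhost:19531"   # placeholder
CURSOR_FILE = "postfix.cursor"          # placeholder state file

headers = {"Accept": "application/json"}
if os.path.exists(CURSOR_FILE):
    with open(CURSOR_FILE) as f:
        headers["Range"] = "entries=" + f.read().strip()

req = urllib.request.Request(
    GATEWAY + "/entries?_SYSTEMD_UNIT=postfix.service", headers=headers)

last_cursor = None
with urllib.request.urlopen(req) as resp:
    for line in resp:
        line = line.strip()
        if not line:
            continue
        entry = json.loads(line)
        last_cursor = entry.get("__CURSOR", last_cursor)
        # ... hand the entry to whatever does the per-domain filtering ...

if last_cursor:
    with open(CURSOR_FILE, "w") as f:
        f.write(last_cursor)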

Regards.
--
Canek Peláez Valdés
Profesor de asignatura, Facultad de Ciencias
Universidad Nacional Autónoma de México


Re: [gentoo-user] logs in the browser?

2015-02-24 Thread Stefan G. Weichinger
On 24.02.2015 13:14, Rich Freeman wrote:

 I suspect this is trivial - it looks like something like this would work:
 http://.../entries?_SYSTEMD_UNIT=postfix.service

Yes, correct, as I thought this is the easy part.

Works:

http://mythtv.local:19531/entries?_SYSTEMD_UNIT=postfix.service

(using my mythbackend as test target).


 (note, you might need to tweak that - I haven't used the http gateway
 personally and am going from the manpage)
 
 AND only the lines relevant for the domain of the specific customer by
 doing this?
 
 I think you're going to be stuck here unless they come from different
 machines or something like that.  Obviously you can pipe the output
 through grep but journald will only pre-filter the output using
 journal fields, like facility, priority, etc.  syslog only provided a
 few fields for clients to specify, and this is probably because in the
 end the data just got dumped to a text file so that it wasn't
 searchable by field anyway.  It would be nice if they extended the
 syslog protocol for systemd and made it possible for clients to
 specify additional fields, but obviously the client would need to
 support this (likely sending logs over dbus or such).
 
 The http gateway seems like it is intended more as a transport
 mechanism with some usability for ad-hoc human viewing.  It isn't a
 full-fledged log analysis tool.  The fact that journald can output in
 JSON with uuids for each entry should make it far easier to parse its
 logs with an analysis tool, but I think all those vendors are playing
 catch-up.  I suspect they'll support it fairly soon once they see
 everybody using it.  From a machine parsing standpoint the fielded
 binary format makes a lot more sense.

Maybe I could set up some other web-app that (a) looks at the link
pointing to the postfix.service-logs and (b) filters them?

I could post to the systemd-devel-ml ... btw ;-)

Stefan




Re: [gentoo-user] Report: Experience with f2fs

2015-02-24 Thread Rich Freeman
On Tue, Feb 24, 2015 at 8:11 AM, Todd Goodman t...@bonedaddy.net wrote:

 Can you explain why a log-based filesystem like f2fs would have any
 impact on wear leveling?

 As I understand it, wear leveling (and bad block replacement) occurs on
 the SSD itself (in the Flash Translation Layer probably.)


Well, if the device has a really dumb firmware there is nothing you
can do to prevent it from wearing itself out.  However, log-based
filesystems and f2fs in particular are designed to make this very
unlikely in practice.

Log-based filesystems never overwrite data in place.  Instead all
changes are appended into empty space, until a large region of the
disk is full.  Then the filesystem:
1.  Allocates a new unused contiguous region of the disk (which was
already trimmed).  This would be aligned to the erase block size on
the underlying SSD.
2.  Copies all data that is still in use from the oldest allocated
region of the disk to the new region.
3.  Trims the entire old region, which was aligned to the erase block
size when it was originally allocated.

So, the entire space of the disk is written to sequentially, and the
head basically eats the tail.  Every block on the drive gets written
to once before the first block on the drive gets written to twice.

The design of the filesystem is basically ideal for flash, and all the
firmware has to do is not mess up the perfect order it is handed on a
silver platter.  You never end up overwriting only part of an erase
block in place, and you're trimming very large contiguous regions of
the disk at once.  Since flash chips don't care about sequential
access, if part of the flash starts to fail the firmware just needs to
map out those blocks, and as long as it maps an entire erase block at
a time you'll get the same performance.  Of course, if part of the
flash starts to fail the erase count of the remainder of the drive
will be identical, so you're going to need a new drive soon.
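
To put a number on the "head eats the tail" point, here is a toy model -
nothing to do with how f2fs or a real FTL is implemented, just the
arithmetic of the argument:

#!/usr/bin/env python3
# Compare erase counts: in-place rewrites of a hot region vs. a pure
# sequential log that wraps around the device.
import random

BLOCKS = 100   # pretend erase blocks
HOT    = 10    # the "hot" 10% of logical blocks
writes = [random.randrange(HOT) if random.random() < 0.9
          else random.randrange(HOT, BLOCKS)
          for _ in range(100000)]

# In-place filesystem: every rewrite erases the block it lands on.
inplace = [0] * BLOCKS
for block in writes:
    inplace[block] += 1

# Log-structured filesystem: writes always go to the next block in the
# log, wrapping around, regardless of which logical block was rewritten.
log = [0] * BLOCKS
head = 0
for _ in writes:
    log[head] += 1
    head = (head + 1) % BLOCKS

print("in-place erase counts: min", min(inplace), "max", max(inplace))
print("log      erase counts: min", min(log), "max", max(log))

In this toy model the log never lets two blocks differ by more than one
erase, while the hot blocks in the in-place case wear out roughly 80 times
faster than the cold ones.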

I'd love to see a next-gen filesystem for flash that also takes into
account COW snapshotting/reflinks, protection from silent corruption,
and some of the RAID-like optimizations possible with btrfs/zfs.
Since log-based filesystems are COW by nature I'd think that this
would be achievable.  The other side of this would be using SSDs as
caches for something like btrfs/zfs on disk - something largely
possible with zfs today, and perhaps planned for btrfs.

-- 
Rich



[gentoo-user] confusion on profiles

2015-02-24 Thread Douglas J Hunley
I just saw this today:
   [20]  hardened/linux/amd64/no-emul-linux-x86
   [21]  hardened/linux/amd64/selinux
   [22]  hardened/linux/amd64/no-multilib
   [23]  hardened/linux/amd64/no-multilib/selinux
   [24]  hardened/linux/amd64/x32

but I don't understand the difference between 20 and 24. I thought x32 was
the path to getting rid of the emul* stuff ?
-- 
Douglas J Hunley (doug.hun...@gmail.com)
Twitter: @hunleyd   Web:
about.me/douglas_hunley
G+: http://google.com/+DouglasHunley


Re: [gentoo-user] Report: Experience with f2fs

2015-02-24 Thread Peter Humphrey
On Tuesday 24 February 2015 07:31:26 Rich Freeman wrote:

 In general though there is a reason that sysadmins tend to be very
 conservative with filesystems.  I doubt most even jumped onto ext4 all
 that quickly, even though it was very stable from the moment it was
 declared stable.  You really need to look at your use case and
 understand the risks and benefits and how you plan to mitigate the
 risks.  Something being experimental isn't a reason to automatically
 avoid using it if it brings some significant benefit to your design,
 as long as you've mitigated the risks.

Yes, and that's why I felt the risk justified when I adopted f2fs in 
that box. It's a LAN server and so doesn't change much, and it's backed 
up weekly. Well, the web and http-replicator proxies do have constantly 
changing data of course, but nothing that can't be fetched again 
cheaply. 

 And, of course, if your goal is to better understand an experimental
 technology in a non-critical setting you should probably just get your
 feet wet.

Indeed. And I'm sometimes impulsive anyway. I certainly didn't conduct a 
formal risk assessment. (tabloid Shock! Horror! /tabloid)  :-)

-- 
Rgds
Peter.




Re: [gentoo-user] confusion on profiles

2015-02-24 Thread Markos Chandras
On 02/24/2015 03:53 PM, Douglas J Hunley wrote:
 I just saw this today:
 [20]  hardened/linux/amd64/no-emul-linux-x86
 [21]  hardened/linux/amd64/selinux
 [22]  hardened/linux/amd64/no-multilib
 [23]  hardened/linux/amd64/no-multilib/selinux
 [24]  hardened/linux/amd64/x32

 but I don't understand the difference between 20 and 24. I thought
 x32 was the path to getting rid of the emul* stuff ?
 -- 
 Douglas J Hunley (doug.hun...@gmail.com)
 Twitter: @hunleyd   Web: about.me/douglas_hunley
 G+: http://google.com/+DouglasHunley

x32 is a new ABI (32-bit pointers on the amd64 architecture). Profile 20
is one that has all the emul-* packages masked, so the new multilib
ebuilds take precedence. They are completely different profiles.

You probably need to read this:
http://wiki.gentoo.org/wiki/Multilib_System_without_emul-linux_Packages

-- 
Regards,
Markos Chandras



Re: [gentoo-user] Report: Experience with f2fs

2015-02-24 Thread Todd Goodman
* Rich Freeman ri...@gentoo.org [150224 10:19]:
 On Tue, Feb 24, 2015 at 8:11 AM, Todd Goodman t...@bonedaddy.net wrote:
 
  Can you explain why a log-based filesystem like f2fs would have any
  impact on wear leveling?
 
  As I understand it, wear leveling (and bad block replacement) occurs on
  the SSD itself (in the Flash Translation Layer probably.)
 
 
 Well, if the device has a really dumb firmware there is nothing you
 can do to prevent it from wearing itself out.  However, log-based
 filesystems and f2fs in particular are designed to make this very
 unlikely in practice.
 
 Log-based filesystems never overwrite data in place.  Instead all
 changes are appended into empty space, until a large region of the
 disk is full.  Then the filesystem:
 1.  Allocates a new unused contiguous region of the disk (which was
 already trimmed).  This would be aligned to the erase block size on
 the underlying SSD.
 2.  Copies all data that is still in use from the oldest allocated
 region of the disk to the new region.
 3.  Trims the entire old region, which was aligned to the erase block
 size when it was originally allocated.
 
 So, the entire space of the disk is written to sequentially, and the
 head basically eats the tail.  Every block on the drive gets written
 to once before the first block on the drive gets written to twice.
[..SNIP..]

Thanks for the info.

But the device is still doing wear leveling and bad block
replacement, so you're beholden to those algorithms, and what you think
you're allocating as sequential blocks of the flash is not necessarily so.

Of course any decent wear leveling algorithm is still going to work
fine, but it seems to me like the wear leveling is still occurring in the
device and the filesystem is beneficial for use on flash based devices
for other reasons.

I'm sure I'm still missing something though.

Thanks,

Todd



Re: [gentoo-user] logs in the browser?

2015-02-24 Thread Rich Freeman
On Tue, Feb 24, 2015 at 10:33 AM, Stefan G. Weichinger li...@xunil.at wrote:
 Maybe I could set up some other web-app that (a) looks at the link
 pointing to the postfix.service-logs and (b) filters them?

 I could post to the systemd-devel-ml ... btw ;-)


Seems like there should be a systemd-users mailing list, actually.
This sort of situation is completely distro-agnostic.

You certainly could design such an application.  If you do so I'd
consider pulling the journald logs in JSON format.  I'd also see if
somebody actually has written a journald library/class/etc for your
language of choice - it seems like that is the sort of thing that is
likely to exist soon if not already.  One of the goals of journald (and
systemd) is to provide more of an API for services so that there is
less parsing of text files and communicating via signals/etc.  I'm
sure with appropriate permissions a process could just obtain log
entries via dbus, and using cursors poll for new entries (or maybe
there is a push/stream mechanism).

Really though it seems like the solution is a generic log monitor with
rules for such things, with the monitor utilizing the JSON data from
journald for richer metadata/efficiency/accuracy.
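
As a very rough sketch of that kind of monitor: the rules and the alert()
stub below are invented for illustration, and the only real interface it
relies on is journalctl -f -o json (one JSON object per line):

#!/usr/bin/env python3
# Follow journald as JSON and apply simple substring rules per field.
import json
import subprocess

RULES = [
    # (journal field, substring, label) - hypothetical examples
    ("_SYSTEMD_UNIT", "postfix.service", "postfix"),
    ("PRIORITY",      "3",               "err-or-worse"),
]

def alert(label, entry):
    # Stand-in for whatever the monitor should do (mail, web UI, etc.)
    print("[%s] %s" % (label, entry.get("MESSAGE", "")))

proc = subprocess.Popen(["journalctl", "-f", "-o", "json"],
                        stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    line = line.strip()
    if not line:
        continue
    entry = json.loads(line)
    for field, needle, label in RULES:
        value = entry.get(field, "")
        if isinstance(value, str) and needle in value:
            alert(label, entry)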

-- 
Rich



Re: [gentoo-user] syslog-ng: how to read the log files

2015-02-24 Thread Stroller

On Sun, 22 February 2015, at 11:48 pm, lee l...@yagibdah.de wrote:
 
 I believe this may be bug 406623.
 
 https://bugs.gentoo.org/show_bug.cgi?id=406623
 
 That's almost three years old and should apparently be fixed?
 
 It's only been closed in the last few weeks. 
 
 Still I wonder why it took so long to fix it.

That's hardly unusual - Gentoo is massively understaffed.

Even version bumps may sometimes take weeks to be actioned.

Stroller.




Re: [gentoo-user] Re: [SOLVED] What happened to my 2nd eth0?

2015-02-24 Thread Walter Dnes
On Tue, Feb 24, 2015 at 06:43:19AM +, Mick wrote

 PS.  Did you look at setting your desired subnet rather than a link-local
 auto-configured address at your HDHomerun device?

  Not yet.  I'm still cleaning up some odds-n-ends of my simple upgrade
from 32-bit to 64-bit mode.  Also, as a matter of principle, I'd like to
learn how to set up multiple routes using the iproute2 suite.  This
looks like a perfect learning opportunity.

-- 
Walter Dnes waltd...@waltdnes.org
I don't run desktop environments; I run useful applications



Re: [gentoo-user] the new ssd, is it happy?

2015-02-24 Thread Bob Wya
Super obvious question... but can you enable AHCI mode for your SATA
controller in the BIOS?

Are you using HP-supplied SATA cables? Because these may be sucky crap. If
so, I would try replacing them - especially if they don't have latches on
the plugs.

I think this is the specification for your motherboard chipset:
http://www.intel.com/content/www/us/en/chipsets/mainstream-chipsets/h67-express-chipset.html
So you probably need decent SATA 3.0 (6 Gbit) cables to get the best
support for your SSD.

I've had some issues with 6Gbit SATA... Basically you are looking at really
high switching speeds - where connector quality makes a huge difference. It
might even be worth cleaning out your SATA connectors with isopropyl
alcohol (99%).




On 24 February 2015 at 20:49, Stefan G. Weichinger li...@xunil.at wrote:


 ordered myself a new and shiny ssd last week.

 one thinkpad still had that 60GB OCZ Vertex3 and that was a bit tight
 now and then.

 So I ordered a Samsung SSD 850 EVO 500GB for my desktop and planned to
 move the former 840 EVO 250GB to the thinkpad.

 Done today.

 Moving was rather *boring* -

 partition ssd, add new partition to btrfs filesystem, remove old
 partition from btrfs filesystem, wait ~15 minutes, in the meantime copy
 over the UEFI ESP to the new disk, install gummiboot there ...

 it booted up on the first try ... oh my, what has happened to good old gentoo?

 :-P

 (resized stuff, yes ... but no big blockers anywhere)

 What I would like to discuss now:

 # dmesg  | grep ata1
 [1.930869] ata1: SATA max UDMA/133 abar m2048@0xfb205000 port
 0xfb205100 irq 26
 [2.235852] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
 [2.237378] ata1.00: supports DRM functions and may not be fully
 accessible
 [2.237506] ata1.00: failed to get NCQ Send/Recv Log Emask 0x1
 [2.237509] ata1.00: ATA-9: Samsung SSD 850 EVO 500GB, EMT01B6Q, max
 UDMA/133
 [2.237521] ata1.00: 976773168 sectors, multi 1: LBA48 NCQ (depth
 31/32), AA
 [2.237979] ata1.00: supports DRM functions and may not be fully
 accessible
 [2.238071] ata1.00: failed to get NCQ Send/Recv Log Emask 0x1
 [2.238166] ata1.00: configured for UDMA/133
 [20207.916327] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
 [20207.916528] ata1.00: supports DRM functions and may not be fully
 accessible
 [20207.916598] ata1.00: failed to get NCQ Send/Recv Log Emask 0x1
 [20207.918249] ata1.00: supports DRM functions and may not be fully
 accessible
 [20207.918325] ata1.00: failed to get NCQ Send/Recv Log Emask 0x1
 [20207.918419] ata1.00: configured for UDMA/133


 Are these failed lines ok?

 The box itself is a bit older, a

 Hewlett-Packard HP Elite 7300 Series MT/2AB5, BIOS 7.12

 (I never found a BIOS update! btw ...)

 so maybe the chipset lacks features the SSD might be able to use.

 Everything works fine so far, I would just like to understand if things
 are OK with this new piece of hardware.

 additional:

 Device Model: Samsung SSD 850 EVO 500GB
 Firmware Version: EMT01B6Q

 I did not find any firmware update online, do you agree?

 Thanks, regards, Stefan




-- 

All the best,
Robert


Re: [gentoo-user] Re: [SOLVED] What happened to my 2nd eth0?

2015-02-24 Thread Tom H
On Tue, Feb 24, 2015 at 1:43 AM, Mick michaelkintz...@gmail.com wrote:
 On Monday 23 Feb 2015 08:39:42 Walter Dnes wrote:

 Looks like it's time to play around with the ip command and try to
 duplicate my current setup.  Does anyone have a multi-route setup
 similar to mine configured with iproute2?  The net.example file says

 # If you need more than one address, you can use something like this
 # NOTE: ifconfig creates an aliased device for each extra IPv4 address
 #   (eth0:1, eth0:2, etc)
 #   iproute2 does not do this as there is no need to
 # WARNING: You cannot mix multiple addresses on a line with other parameters!
 #config_eth0=192.168.0.2/24 192.168.0.3/24 192.168.0.4/24
 # However, that only works with CIDR addresses, so you can't use
 # netmask.

   What exactly do they mean by...
 iproute2 does not do this as there is no need to

 There is no need to create virtual interfaces like eth0:1 to be able to have
 secondary IP addresses.  The ip command adds them to the same eth0 interface.

Labels (iproute2 aliases) aren't required but can be useful:

# ip a sh dev wlan0
2: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
group default qlen 1000
link/ether e8:2a:ea:0f:68:ec brd ff:ff:ff:ff:ff:ff
inet 192.168.1.240/24 brd 192.168.1.255 scope global wlan0
   valid_lft forever preferred_lft forever

# ip a add 192.168.1.250/24 brd + label wlan0:0 dev wlan0

# ip a sh dev wlan0
2: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
group default qlen 1000
link/ether e8:2a:ea:0f:68:ec brd ff:ff:ff:ff:ff:ff
inet 192.168.1.240/24 brd 192.168.1.255 scope global wlan0
   valid_lft forever preferred_lft forever
inet 192.168.1.250/24 brd 192.168.1.255 scope global secondary wlan0:0
   valid_lft forever preferred_lft forever

# ip a sh label wlan0:0
inet 192.168.1.250/24 brd 192.168.1.255 scope global secondary wlan0:0
   valid_lft forever preferred_lft forever

# ip a fl label wlan0:0

# ip a sh dev wlan0
2: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
group default qlen 1000
link/ether e8:2a:ea:0f:68:ec brd ff:ff:ff:ff:ff:ff
inet 192.168.1.240/24 brd 192.168.1.255 scope global wlan0
   valid_lft forever preferred_lft forever