Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread gevisz
2016-09-02 7:23 GMT+03:00 gevisz :
> 2016-09-01 11:55 GMT+03:00 Frank Steinmetzger :
>> On Thu, Sep 01, 2016 at 11:44:19AM +0300, gevisz wrote:
>>
>>> > Some people do a full systems check (i.e. badblocks) before entrusting a
>>> > drive with anything important.
>>>
>>> It is good advice! I have already thought of this, but I am sorry to
>>> acknowledge that, since the "good old times" of MS DOS 6.22, I have never
>>> done this in Linux. :(
>>> […]
>>> So, can you please advise me about the program or utility that can do
>>> a badblocks check for me?
>>
>> Badblocks is part of e2fsprogs. But since you’re using USB2, this will
>> really take a while. At best I get 39 MB/s out of it. Another way is a
>> S.M.A.R.T. test, methinks `smartctl -t full` is the command for that. But I
>> don’t know what exactly is being tested there. But it runs fully internally
>> on the disk, so no USB2 bottleneck. Others may chime in if I tell fairy tales.
>
> So far, the hard drive passed two (small) smart tests started by commands:
> # smartctl -c -t short -d sat /dev/sdc
> and
> # smartctl -t conveyance -d sat /dev/sdc



> However, after running
> # smartctl -t long -d sat /dev/sdc
> I have no indication that it has been passed:
> # smartctl -t long -d sat /dev/sdc
> smartctl 6.4 2015-06-04 r4109 [x86_64-linux-4.4.6-gentoo] (local build)
> Copyright (C) 2002-15, Bruce Allen, Christian Franke, www.smartmontools.org
>
> === START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
> Sending command: "Execute SMART Extended self-test routine immediately
> in off-line mode".
> Drive command "Execute SMART Extended self-test routine immediately in
> off-line mode" successful.
> Testing has begun.
> Please wait 571 minutes for test to complete.
> Test will complete after Fri Sep  2 04:02:18 2016
>
> Use smartctl -X to abort test.
>
> Fri Sep 2 6:10
> # smartctl -l selftest -d sat /dev/sdc
> smartctl 6.4 2015-06-04 r4109 [x86_64-linux-4.4.6-gentoo] (local build)
> Copyright (C) 2002-15, Bruce Allen, Christian Franke, www.smartmontools.org
>
> Read Device Identity failed: scsi error device will be ready soon
>
> A mandatory SMART command failed: exiting. To continue, add one or
> more '-T permissive' options.

Well, maybe it has not finished yet.
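
(A quick way to check whether the long test is still running, assuming the
drive is still at /dev/sdc, is the capabilities readout; its "Self-test
execution status" field reports the percentage remaining:)

# smartctl -c -d sat /dev/sdc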



Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread gevisz
2016-09-01 11:55 GMT+03:00 Frank Steinmetzger :
> On Thu, Sep 01, 2016 at 11:44:19AM +0300, gevisz wrote:
>
>> > Some people do a full systems check (i.e. badblocks) before entrusting a
>> > drive with anything important.
>>
>> It is good advice! I have already thought of this, but I am sorry to
>> acknowledge that, since the "good old times" of MS DOS 6.22, I have never
>> done this in Linux. :(
>> […]
>> So, can you please advise me about the program or utility that can do
>> a badblocks check for me?
>
> Badblocks is part of e2fsprogs. But since you’re using USB2, this will
> really take a while. At best I get 39 MB/s out of it. Another way is a
> S.M.A.R.T. test, methinks `smartctl -t full` is the command for that. But I
> don’t know what exactly is being tested there. But it runs fully internally
> on the disk, so no USB2 bottleneck. Others may chime in if I tell fairy tales.
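
(For the record, the read-only badblocks scan Frank refers to would be along
these lines; the -w variant is a destructive read-write test and should only
be run while the drive holds no data:)

# badblocks -sv /dev/sdc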

So far, the hard drive passed two (small) smart tests started by commands:
# smartctl -c -t short -d sat /dev/sdc
and
# smartctl -t conveyance -d sat /dev/sdc
as is indicated by
# smartctl -l selftest -d sat /dev/sdc
smartctl 6.4 2015-06-04 r4109 [x86_64-linux-4.4.6-gentoo] (local build)
Copyright (C) 2002-15, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Conveyance offline  Completed without error       00%         0         -
# 2  Short offline       Completed without error       00%         0         -

However, after running
# smartctl -t long -d sat /dev/sdc
I have no indication that it has been passed:
# smartctl -t long -d sat /dev/sdc
smartctl 6.4 2015-06-04 r4109 [x86_64-linux-4.4.6-gentoo] (local build)
Copyright (C) 2002-15, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
Sending command: "Execute SMART Extended self-test routine immediately
in off-line mode".
Drive command "Execute SMART Extended self-test routine immediately in
off-line mode" successful.
Testing has begun.
Please wait 571 minutes for test to complete.
Test will complete after Fri Sep  2 04:02:18 2016

Use smartctl -X to abort test.

Fri Sep 2 6:10
# smartctl -l selftest -d sat /dev/sdc
smartctl 6.4 2015-06-04 r4109 [x86_64-linux-4.4.6-gentoo] (local build)
Copyright (C) 2002-15, Bruce Allen, Christian Franke, www.smartmontools.org

Read Device Identity failed: scsi error device will be ready soon

A mandatory SMART command failed: exiting. To continue, add one or
more '-T permissive' options.
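
(If the USB bridge was simply still busy with the self-test, repeating the
command later, or taking the error message's suggestion literally, should
bring the log back:)

# smartctl -l selftest -d sat -T permissive /dev/sdc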

# smartctl -a -d sat /dev/sdc
smartctl 6.4 2015-06-04 r4109 [x86_64-linux-4.4.6-gentoo] (local build)
Copyright (C) 2002-15, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model: WDC WD50EZRZ-00GZ5B1
Serial Number:   
LU WWN Device Id: 
Firmware Version: 80.00A80
User Capacity:5,000,981,078,016 bytes [5.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate:5700 rpm
Device is:Not in smartctl database [for details use: -P showall]
ATA Version is:   ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:Fri Sep  2 06:12:50 2016 EEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x80)Offline data collection activity
was never started.
Auto Offline Data Collection: Enabled.
Self-test execution status:  (   0)The previous self-test
routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (57180) seconds.
Offline data collection
capabilities:  (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities:(0x0003)Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability:(0x01)Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time:  (   2) minutes.
Extended self-test routine
recommended polling time:  ( 571) minutes.
Conveyance self-test routine
recommended polling time:  (   5) minutes.
SCT capabilities:(0x3035)SCT Status supported.
 

[gentoo-user] Re: What's happened to gentoo-sources?

2016-09-01 Thread »Q«
On Thu, 1 Sep 2016 22:56:31 +0100
Neil Bothwick  wrote:

> On Thu, 1 Sep 2016 22:08:19 +0200, Kai Krakow wrote:
 
> > Removal of a 4.6 series ebuild also means there would follow no
> > updates  
> 
> Are there any updates to the 4.6 series, or was 4.7 considered its
> successor by the kernel devs? If the former, I can understand your
> point. If the latter, there would be no updates, so there is no point.

The latter.  The announcement of that is probably what prompted the
tree-cleaning.





[gentoo-user] Re: Can't create valid btrfs on NVMe

2016-09-01 Thread Kai Krakow
Am Thu, 18 Aug 2016 14:47:07 +0100
schrieb Neil Bothwick :

> On Thu, 18 Aug 2016 09:38:03 -0400, Rich Freeman wrote:
> 
> > This is almost certainly a bug in btrfs-progs, or maybe the btrfs
> > filesystem driver in the kernel.  
> 
> The latter, a later kernel appears to have done the trick.
>  
> > I'd suggest raising this on the btrfs mailing list, where it is
> > going to get a lot more attention from the people who develop
> > btrfs.  There are a few of us who use it around here, but I'd have
> > to spend a day tweaking the btrfs-progs source to have a guess at
> > where this is bailing out.  I suspect somebody over there would
> > have an answer almost immediately.  
> 
> As our resident btrfs expert, I was expecting you to come up with an
> immediate answer ;-)

Have you tried an explicit "btrfs dev scan"? If that helps, the problem
may stem from the udev rules...

-- 
Regards,
Kai

Replies to list-only preferred.


pgp3m0CmvqxAh.pgp
Description: Digitale Signatur von OpenPGP


[gentoo-user] Re: What's happened to gentoo-sources?

2016-09-01 Thread Kai Krakow
Am Thu, 1 Sep 2016 22:56:31 +0100
schrieb Neil Bothwick :

> > - so my next upgrade would "force" me into deciding going way down
> > (probably a bad idea) or up into unknown territory (and this showed:
> > can also be a problem). Or I can stay with 4.6 until depclean
> > removed it for good (which will, by the way, remove the files
> > from /usr/src).  
> 
> Depclean won't remove it if you add it to world.

Multi-slot packages ARE removed by depclean except the last stable
version - which jumped backwards for me because I used an ~arch kernel
that was removed from portage.

The following happened:

# emerge -DNua world

Reinstalled 4.4 for me. Problem here: I didn't notice that 4.4.something
is not 4.6.something in the result list. So I continued with:

# emerge --depclean -a
# cd /usr/src/linux && make oldconfig && ... the usual stuff

Wow, that went fast. Then I realized why: I had just depcleaned my 4.6
sources while the 4.4 compile objects were still there. I tried to
reinstall 4.6; it failed, of course: the package is no longer in portage.

I thought: Okay, there's probably a reason, let's get to 4.7.2 then -
what should possibly go wrong? It's not 4.7.0 and I still have 4.6
in /boot. Yeah, what should go wrong... I shouldn't have asked. TL;DR:
I restored from backup.

This must be a coincidence. I wanted to go to a stable kernel at the next
opportunity anyway - but forward in version, not backward. ;-)

But let's get back to the point:

Depclean does remove multi-slotted kernel sources. It does it with
every multislot package unless there's an explicit slot dependency or
you explicitly mention the slot in the world file.
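
(Such a world file entry is just the atom with its slot appended; assuming
the 4.6.4 sources, the line in /var/lib/portage/world would read:)

sys-kernel/gentoo-sources:4.6.4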

Since I cannot remember such a surprise removal* ever happening before
and putting me in such a situation, this was completely new to me
(in 10+ years of Gentoo usage).

Lesson learned: Keep your eyes open. Maybe I'll put the kernel slot
into my world file, with the opposite downside that has. (Note to
myself.)


*: This is subjective, I know.

-- 
Regards,
Kai

Replies to list-only preferred.


pgp1wCV6yrJTL.pgp
Description: Digitale Signatur von OpenPGP


[gentoo-user] Re: [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Kai Krakow
Am Fri, 2 Sep 2016 01:53:31 +0200
schrieb Alan McKinnon :

> On 01/09/2016 10:49, gevisz wrote:
> > 2016-09-01 10:30 GMT+03:00 Matthias Hanft :  
> >> gevisz wrote:  
>  [...]  
> >>
> >> If your filesystem becomes corrupt (and you are unable to
> >> repair it), *all* of your data is lost (instead of just
> >> one partition). That's the only disadvantage I can think
> >> of.  
> >
> > That is exactly what I am afraid of!
> >
> > So, the 20-years old rule of thumb is still valid. :(  
> 
> No, it is not valid, and it is not true.
> 
> Data corruption on-disk does not by and large (unless you are very 
> unlucky) corrupt file systems. It corrupts files.
> 
> Secondly, by and large, most people have all the files they really
> care about on one partition, called DATA or similar. Everything else
> except your data can usually be reconstructed, especially the OS
> itself. You probably store all that data in one volume simply because
> it makes logical sense to do so. Data is read and written far more
> than anything else on your disk so if you are unlucky enough to
> suffer volume corruption it's likely to be on a) the biggest volume
> and b) the busiest volume. In both cases it is your data, meaning
> your data is what is exposed to risk and everything else not so much.

This is one of the best points, and very easy to follow. *thumbsup*

> Yes, this is a real factor you mention. It is detectable and
> measurable. It's also minute and statistically irrelevant if you
> haven't dealt with environmental factors that cause data damage
> (dodgy ram, cables, psus, over-temps, brownouts). If those things
> happen, and they WILL happen, you are at least 10-20 times more
> likely to lose your data than anything else, no matter how you
> partitioned the disk.

So you can store everything in the same partition anyway. Especially
since Windows doesn't enforce the convention that all data should be
where the user thinks it is (on "DATA or similar"), an important part
of your data ends up on the OS drive anyway. Instead, make a backup
of the complete user profile.

But one should take into account: not only the data has value; the
work needed to reconstruct the OS and applications has value, too. So
better put that in the backup as well. One more point for using one
partition.

-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: USB crucial file recovery

2016-09-01 Thread Kai Krakow
Am Thu, 1 Sep 2016 23:02:17 +0100
schrieb Neil Bothwick :

> On Thu, 1 Sep 2016 23:50:17 +0200, Kai Krakow wrote:
> 
>  [...]  
> > > 
> > > That's not true. Whoever owns the files and directories will be
> > > able to access then, even if root mounted the stick, just like a
> > > hard drive. If you have the same UID on all your systems, chown -R
> > > youruser: /mount/point will make everything available on all
> > > systems.
> > 
> > As long as uids match...  
> 
> That's what I said, whoever owns the files. As far as Linux is
> concerned, the UID is the person, usernames are just a convenience
> mapping to make life simpler for the wetware.

Oh yes, I was confused... ;-)

After I hit reply, my eyes stopped at "owns the file" and continued at
"chown -R youruser". So if others were confused too: now they
shouldn't be.

-- 
Regards,
Kai

Replies to list-only preferred.


pgpnUM8vKo_0F.pgp
Description: Digitale Signatur von OpenPGP


Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Alan McKinnon

On 01/09/2016 10:49, gevisz wrote:

2016-09-01 10:30 GMT+03:00 Matthias Hanft :

gevisz wrote:


But what are disadvantages of not partitioning a big
hard drive into smaller logical ones?


If your filesystem becomes corrupt (and you are unable to
repair it), *all* of your data is lost (instead of just
one partition). That's the only disadvantage I can think
of.


That is exactly what I am afraid of!

So, the 20-years old rule of thumb is still valid. :(


No, it is not valid, and it is not true.

Data corruption on-disk does not by and large (unless you are very 
unlucky) corrupt file systems. It corrupts files.


Secondly, by and large, most people have all the files they really care 
about on one partition, called DATA or similar. Everything else except 
your data can usually be reconstructed, especially the OS itself. You 
probably store all that data in one volume simply because it makes 
logical sense to do so. Data is read and written far more than anything 
else on your disk so if you are unlucky enough to suffer volume 
corruption it's likely to be on a) the biggest volume and b) the busiest 
volume. In both cases it is your data, meaning your data is what is 
exposed to risk and everything else not so much.


Yes, this is a real factor you mention. It is detectable and
measurable. It's also minute and statistically irrelevant if you
haven't dealt with environmental factors that cause data damage (dodgy
ram, cables, psus, over-temps, brownouts). If those things happen, and
they WILL happen, you are at least 10-20 times more likely to lose your
data than anything else, no matter how you partitioned the disk.








[gentoo-user] Re: guvcview update produces an executable with missing lib...

2016-09-01 Thread Kai Krakow
Am Tue, 30 Aug 2016 03:55:32 +0200
schrieb meino.cra...@gmx.de:

> Daniel Frey  [16-08-30 03:48]:
> > On 08/29/2016 11:11 AM, waltd...@waltdnes.org wrote:  
> > > On Mon, Aug 29, 2016 at 06:29:50PM +0200, meino.cra...@gmx.de
> > > wrote  
>  [...]  
>  [...]  
>  [...]  
> > > 
> > >   The first suggestion in a case like this is to run
> > > revdep-rebuild.  As a matter of fact, it probably wouldn't hurt
> > > to run revdep-rebuild after every update.
> > >   
> > 
> > Yes, I do. Portage occasionally misses a rebuild. It's a lot better
> > now at catching them but it still misses them.
> > 
> > 
> > 
> > Dan
> >   
> 
> Ok, we now know that depclean and revdep-rebuild are needed after each
> update. It is something which I put together into one script which
> I run after each update.
> 
> And we know that portage seems to be guilty. And that it should not be
> guilty. And that it does better in these cases.
> 
> One thing remains:
> How can I get that guvcview up and running?

Try "emerge -1a @preserved-rebuild". It should catch preserved libs
that revdep-rebuild cannot see at missing but doesn't scan neither
because it doesn't consider them as part of the installed files.

Usually, after rebuilding everything to new metadata, you no longer use
revdep-rebuild because the preserved-libs feature renders it mostly
useless. Instead, @preserved-rebuild jumps in to rebuild and cleanup.

You may, however, catch a situation where the configure phase detects
the new version of a lib but the linker phase uses the preserved lib of
an older version of a dependent package. I'd consider this a bug of the
ebuild or the package's build system usually. The way out here is do
forcefully uninstall the package with the "broken" preserved lib, then
reinstall what is not working.

I'd try this:

# Try to unmerge the package that portage thinks the file belongs to:
$ emerge -Ca /usr/lib/libgviewv4l2core-1.0.so.1
(or wherever this file should be; you can use pfl to find out - see the note
after the last step below)

# rebuild binary broken packages now (we used -C, not -c)
$ revdep-rebuild

# rebuild packages using preserved libs
$ emerge -1a @preserved-rebuild

# rebuild your package now, it should now link to the correct lib
$ emerge -1a guvcview

If that still doesn't help, try "qcheck -BHTP" to find packages with
missing files. Rebuild those:

$ emerge -1a $(qcheck -BHTP)

If that also doesn't help, pretend that no dependencies are installed -
this may rebuild A LOT of packages:

$ emerge -1e guvcview
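
Regarding the first step above: emerge -C expects a package atom rather than
a file path, so find the owning package first - a sketch, assuming
app-portage/portage-utils or gentoolkit is installed:

$ qfile /usr/lib/libgviewv4l2core-1.0.so.1
$ equery belongs /usr/lib/libgviewv4l2core-1.0.so.1

Then pass the reported category/package to "emerge -Ca".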

-- 
Regards,
Kai

Replies to list-only preferred.




Re: [gentoo-user] Re: USB crucial file recovery

2016-09-01 Thread Alan McKinnon

On 02/09/2016 00:56, Kai Krakow wrote:

Am Wed, 31 Aug 2016 02:32:24 +0200
schrieb Alan McKinnon :


On 31/08/2016 02:08, Grant wrote:
 [...]
 [...]


You can't control ownership and permissions of existing files with
mount options on a Linux filesystem. See man mount.



So in order to use a USB stick between multiple Gentoo systems with
ext2, I need to make sure my users have matching UIDs/GIDs?


Yes

The uids/gids/modes in the inodes themselves are the owners and perms,
you cannot override them.

So unless you have mode=666, you will need matching UIDs/GIDs (which
is a royal massive pain in the butt to bring about without NIS or
similar).


 I think
this is how I ended up on NTFS in the first place.


Didn't we have this discussion about a year ago? Sounds familiar now


 Is there a
filesystem that will make that unnecessary and exhibit better
reliability than NTFS?


Yes, FAT. It works and works well.
Or exFAT which is Microsoft's solution to the problem of very large
files on FAT.

Which NTFS system are you using?

ntfs kernel module? It's quite dodgy and unsafe with writes
ntfs-3g on fuse? I find that one quite solid


ntfs-3g does have an annoyance that has bitten me more than once. When
ntfs-3g writes to an FS, it can get marked dirty. Somehow, when used
in a Windows machine the driver there has issues with the FS. Remount
it in Linux again and all is good.


Well, ntfs-3g simply sets the dirty flag which to Windows means "needs
chkdsk". So Windows complains upon mount that it needs to chkdsk the
drive first. That's all. Nothing bad.


No, that's not it. Read again what I wrote - I have a specific fail mode
which I don't care to investigate, not the general dirty state flag
setting you describe.






[gentoo-user] Re: guvcview update produces an executable with missing lib...

2016-09-01 Thread Kai Krakow
Am Mon, 29 Aug 2016 12:05:51 -0700
schrieb Daniel Frey :

> On 08/29/2016 11:11 AM, waltd...@waltdnes.org wrote:
> > On Mon, Aug 29, 2016 at 06:29:50PM +0200, meino.cra...@gmx.de
> > wrote  
> >> Hi,
> >>
> >> after updateing my system, beside others guvcview was updated:
> >>
> >> from qlop
> >> Mon Aug 29 17:44:08 2016 >>> media-video/guvcview-2.0.4
> >>
> >> After ldconfig as root and rehash (zsh) as user I got:
> >>  
>  [...]  
> >> guvcview: error while loading shared libraries:
> >> libgviewv4l2core-1.0.so.1: cannot open shared object file: No such
> >> file or directory [1]23848 exit 127   guvcview
> >>
> >>
> >> ...h.
> >>
> >> What did I wrong?
> >>
> >> Best regards,
> >> mcc  
> > 
> >   The first suggestion in a case like this is to run
> > revdep-rebuild.  As a matter of fact, it probably wouldn't hurt to
> > run revdep-rebuild after every update.
> >   
> 
> Yes, I do. Portage occasionally misses a rebuild. It's a lot better
> now at catching them but it still misses them.

I think this mainly happens for very old packages not built in a long
time. The install database simply misses the relevant info. Rebuilding
the package should catch up on this.

I usually see this on old packages installed but not updated for a long
time. "emerge @preserved-rebuild" usually doesn't see them;
revdep-rebuild sees them most of the time but may miss packages which
@preserved-rebuild sees (because revdep-rebuild only detects
missing binary deps but cannot identify orphaned libs which are still
there but should be removed). And some simply need to rebuild once to
migrate to the new installation metadata.

Packages caught in such a way are from then on usually seen by
@preserved-rebuild.

I've been using only "emerge @preserved-rebuild" for a long time now,
without falling back to revdep-rebuild. The latter is quite obsolete to me.

-- 
Regards,
Kai

Replies to list-only preferred.




Re: [gentoo-user] Re: USB crucial file recovery

2016-09-01 Thread Rich Freeman
On Thu, Sep 1, 2016 at 6:35 PM, Kai Krakow  wrote:
> Am Tue, 30 Aug 2016 17:59:02 -0400
> schrieb Rich Freeman :
>
>>
>> That depends on the mode of operation.  In data=journal I believe
>> everything gets written twice, which should make it fairly immune to
>> most forms of corruption.
>
> No, journal != data integrity. A journal only ensures that data is written
> transactionally. You won't end up with messed-up metadata, and from an
> API perspective, with data=journal, a partially written block of data
> will be rewritten after recovering from a crash - up to the last fsync.
> If it happens that this last fsync was halfway into a file: Well, then
> only your work up to that half of the file is written.

Well, sure, but all an application needs to do is make sure it calls
write on whole files, and not half-files.  It doesn't need to fsync as
far as I'm aware.  It just needs to write consistent files in one
system call.  Then that write either will or won't make it to disk,
but you won't get half of a write.

> Journals only ensure consistency on API level, not integrity.

Correct, but this is way better than not journaling or ordering data,
which protects the metadata but doesn't ensure your files aren't
garbled even if the application is careful.

>
> If you need integrity, so then file system can tell you if your file is
> broken or not, you need checksums.
>

Btrfs and zfs fail in the exact same way in this particular regard.
If you call write with half of a file, btrfs/zfs will tell you that
half of that file was successfully written.  But, it won't hold up for
the other half of the file that the kernel hasn't been told about.

The checksumming in these filesystems really only protects data from
modification after it is written.  Sectors that were only half-written
during an outage which have inconsistent checksums probably won't even
be looked at during an fsck/mount, because the filesystem is just
going to replay the journal and write right over them (or to some new
block, still treating the half-written data as unallocated).  These
filesystems don't go scrubbing the disk to figure out what happened,
they just replay the log back to the last checkpoint.  The checksums
are just used during routine reads to ensure the data wasn't somehow
corrupted after it was written, in which case a good copy is used,
assuming one exists.  If not at least you'll know about the problem.

> If you need a way to recover from a half written file, you need a CoW
> file system where you could, by luck, go back some generations.

Only if you've kept snapshots, or plan to hex-edit your disk/etc.  The
solution here is to correctly use the system calls.

>
>> f2fs would also have this benefit.  Data is not overwritten in-place
>> in a log-based filesystem; they're essentially journaled by their
>> design (actually, they're basically what you get if you ditch the
>> regular part of the filesystem and keep nothing but the journal).
>
> This is log-structured, not journalled. You pointed that out, yes, but
> you weakened that by writing "basically the same". I think the
> difference is important. Mostly because the journal is a fixed area on
> the disk, while a log-structured file system has no such journal.

My point was that they're equivalent from the standpoint that every
write either completes or fails and you don't get half-written data.
Yes, I know how f2fs actually works, and this wasn't intended to be a
primer on log-based filesystems.  The COW filesystems have similar
benefits since they don't overwrite data in place, other than maybe
their superblocks (or whatever you call them).  I don't know what the
on-disk format of zfs is, but btrfs has multiple copies of the tree
root with a generation number so if something dies partway it is
really easy for it to figure out where it left off (if none of the
roots were updated then any partial tree structures laid down are in
unallocated space and just get rewritten on the next commit, and if
any were written then you have a fully consistent new tree used to
update the remaining roots).

One of these days I'll have to read up on the on-disk format of zfs as
I suspect it would make an interesting contrast with btrfs.

>
> This point was raised because it supports checksums, not because it
> supports CoW.

Sure, but both provide benefits in these contexts.  And the only COW
filesystems are also the only ones I'm aware of (at least in popular
use) that have checksums.

>
> Log structered file systems are, btw, interesting for write-mostly
> workloads on spinning disks because head movements are minimized.
> They are not automatically helping dumb/simple flash translation layers.
> This incorporates a little more logic by exploiting the internal
> structure of flash (writing only sequentially in page sized blocks,
> garbage collection and reuse only on erase block level). F2fs and
> bcache (as a caching layer) do this. Not sure about the others.

[gentoo-user] Re: USB crucial file recovery

2016-09-01 Thread Kai Krakow
Am Wed, 31 Aug 2016 02:32:24 +0200
schrieb Alan McKinnon :

> On 31/08/2016 02:08, Grant wrote:
>  [...]  
>  [...]  
> >>
> >> You can't control ownership and permissions of existing files with
> >> mount options on a Linux filesystem. See man mount.  
> > 
> > 
> > So in order to use a USB stick between multiple Gentoo systems with
> > ext2, I need to make sure my users have matching UIDs/GIDs?  
> 
> Yes
> 
> The uids/gids/modes in the inodes themselves are the owners and perms,
> you cannot override them.
> 
> So unless you have mode=666, you will need matching UIDs/GIDs (which
> is a royal massive pain in the butt to bring about without NIS or
> similar).
> 
> >  I think
> > this is how I ended up on NTFS in the first place.  
> 
> Didn't we have this discussion about a year ago? Sounds familiar now
> 
> >  Is there a
> > filesystem that will make that unnecessary and exhibit better
> > reliability than NTFS?  
> 
> Yes, FAT. It works and works well.
> Or exFAT which is Microsoft's solution to the problem of very large
> files on FAT.
> 
> Which NTFS system are you using?
> 
> ntfs kernel module? It's quite dodgy and unsafe with writes
> ntfs-3g on fuse? I find that one quite solid
> 
> 
> ntfs-3g does have an annoyance that has bitten me more than once. When
> ntfs-3g writes to an FS, it can get marked dirty. Somehow, when used
> in a Windows machine the driver there has issues with the FS. Remount
> it in Linux again and all is good.

Well, ntfs-3g simply sets the dirty flag which to Windows means "needs
chkdsk". So Windows complains upon mount that it needs to chkdsk the
drive first. That's all. Nothing bad.

> The cynic in me says that Microsoft didn't implement their own FS spec
> properly whereas ntfs-3g did :-)

Or ntfs-3g simply doesn't trust itself enough while MS trusts itself
too much. Modern Windows kernels almost never set the dirty bit and
instead trust the self-healing capabilities of NTFS by using repair
hotspots. By current design, NTFS may be broken at any time while
Windows tells you nothing about it. If the kernel comes across a
defective structure, it marks it as a repair hotspot. A background
process repairs these online. If that fails, it is marked for offline
repair and repaired silently during the mount phase. But the dirty
bit? I haven't seen this in a long time (last time was Windows 2003).
Run a chkdsk on an aging Windows installation which has crashed a time
or two. Did you ever see a chkdsk running? No? Then run a forced
chkdsk. Chances are that it will find and repair problems. Run a
non-forced chkdsk: it will only check if there are repair hotspots. If
none are there, it says: everything fine. It's lying to you.

But still, the papers about NTFS self-healing are quite interesting to
read. It just doesn't appear as mature to me as MS thinks it is.

-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: USB crucial file recovery

2016-09-01 Thread Kai Krakow
Am Tue, 30 Aug 2016 17:59:02 -0400
schrieb Rich Freeman :

> On Tue, Aug 30, 2016 at 4:58 PM, Volker Armin Hemmann
>  wrote:
> >
> > the journal does not add any data integrity benefits at all. It just
> > makes it more likely that the fs is in a sane state if there is a
> > crash. Likely. Not a guarantee. Your data? No one cares.
> >  
> 
> That depends on the mode of operation.  In data=journal I believe
> everything gets written twice, which should make it fairly immune to
> most forms of corruption.

No, journal != data integrity. A journal only ensures that data is written
transactionally. You won't end up with messed-up metadata, and from an
API perspective, with data=journal, a partially written block of data
will be rewritten after recovering from a crash - up to the last fsync.
If it happens that this last fsync was halfway into a file: Well, then
only your work up to that half of the file is written. A well-designed
application can then handle this (e.g. transactional databases). But
your carefully written thesis may still be broken. Journals only ensure
consistency on the API level, not integrity.

If you need integrity, so then file system can tell you if your file is
broken or not, you need checksums.

If you need a way to recover from a half written file, you need a CoW
file system where you could, by luck, go back some generations.

> f2fs would also have this benefit.  Data is not overwritten in-place
> in a log-based filesystem; they're essentially journaled by their
> design (actually, they're basically what you get if you ditch the
> regular part of the filesystem and keep nothing but the journal).

This is log-structured, not journalled. You pointed that out, yes, but
you weakened that by writing "basically the same". I think the
difference is important. Mostly because the journal is a fixed area on
the disk, while a log-structured file system has no such journal.

> > If you want an fs that cares about your data: zfs.
> >  
> 
> I won't argue that the COW filesystems have better data security
> features.  It will be nice when they're stable in the main kernel.

This point was raised because it supports checksums, not because it
supports CoW.

Log structered file systems are, btw, interesting for write-mostly
workloads on spinning disks because head movements are minimized.
They are not automatically helping dumb/simple flash translation layers.
This incorporates a little more logic by exploiting the internal
structure of flash (writing only sequentially in page sized blocks,
garbage collection and reuse only on erase block level). F2fs and
bcache (as a caching layer) do this. Not sure about the others.

Fully (and only fully) journalled file systems are a little similar for
write-mostly workloads because head movements stay within the journal
area but performance goes down as soon as the journal needs to be
spooled to permanent storage.

I think xfs spreads the journal across the storage space along with the
allocation groups (thus better exploiting performance on jbod RAIDs and
RAID systems that do not stripe diagonally already). It may thus also be
an option for thumb drives. But it is known to prefer zeroing out the
contents of files you have just saved, for security reasons, during
unclean shutdowns/unmounts. But it is pretty rock solid. And
it will gain CoW support in the future which probably eliminates the
kill-the-contents issue, even supporting native snapshotting then.


-- 
Regards,
Kai

Replies to list-only preferred.




Re: [gentoo-user] Re: USB crucial file recovery

2016-09-01 Thread Neil Bothwick
On Thu, 1 Sep 2016 23:50:17 +0200, Kai Krakow wrote:

> > > ext2 will work, but you'll have to mount it or chmod -R 0777, or
> > > only root will be able to access it.
> > 
> > That's not true. Whoever owns the files and directories will be able
> > to access then, even if root mounted the stick, just like a hard
> > drive. If you have the same UID on all your systems, chown -R
> > youruser: /mount/point will make everything available on all
> > systems.  
> 
> As long as uids match...

That's what I said, whoever owns the files. As far as Linux is
concerned, the UID is the person, usernames are just a convenience
mapping to make life simpler for the wetware.
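
In practice that means the numeric IDs must agree. A quick check, and a
handover of the stick's contents if they do, assuming UID/GID 1000 and a
mount point of /mnt/usbstick:

$ id -u yourname
# chown -R 1000:1000 /mnt/usbstick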


-- 
Neil Bothwick

deja noo - reminds you of the last time you visited Scotland


pgpqnp0MYGuwf.pgp
Description: OpenPGP digital signature


[gentoo-user] Re: USB crucial file recovery

2016-09-01 Thread Kai Krakow
Am Tue, 30 Aug 2016 22:27:46 +0200
schrieb Volker Armin Hemmann :

> Am 30.08.2016 um 21:14 schrieb J. Roeleveld:
> > On August 30, 2016 8:58:17 PM GMT+02:00, Volker Armin Hemmann
> >  wrote:  
> >> Am 30.08.2016 um 20:12 schrieb Alan McKinnon:  
>  [...]  
>  [...]  
>  [...]  
> >> first  
>  [...]  
>  [...]  
> >> includes  
>  [...]  
>  [...]  
>  [...]  
>  [...]  
> >> because exfat does not work across gentoo systems. ext2 does.  
> > Exfat works when the drivers are installed.
> > Same goes for ext2.
> >
> > It is possible to not have support for ext2/3 or 4 and still have a
> > fully functional system. (Btrfs or zfs for the full system for
> > instance)
> >
> > When using UEFI boot, a vfat partition with support is required.
> >
> > --
> > Joost  
> 
> ext2 is on every system

Not on mine...

> , exfat not. ext2 is very stable, tested and
> well aged. exfat is some fuse something crap. New, hardly tested and
> unstable as it gets.
> 
> And why use exfat if you use linux? It is just not needed at all.

I consider ext2 not suitable for USB drives because it has no journal
and can break horribly if accidentally removed without unmounting (or
pulled before everything was written).

OTOH, I recommend against using filesystems with a fixed journal area
on thumb drives. Some may be optimized for NTFS usage.

A log structured filesystem (like f2fs, nilfs2) or one with wandering
journals (like reiserfs) may be best - tho I cannot speak for them
regarding accidental disconnects without unmounting first. One should
test it. Reiserfs3 worked very well for me, much better than ext[23],
when I once had to fight with a failing RAID controller (it just went
offline). All reiserfs could be recovered by fsck, only a few files had
wrong checksums, a few were missing - that system (different hardware
of course) is still in use today but converted over to xfs because
reiserfs is a really bad performer for parallel access. Ext[23] was
totally borked, fsck had no chance to recover anything. I'd consider
reiserfs3 mature.

Personally, I tend to use f2fs on thumb drives. Nilfs2 may be an
option, too, but I have never used it. I have no experience with f2fs on
failing hardware, tho. But in the end: thumb drives aren't for
important data anyway. So what counts is getting the best lifetime out
of them.
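
(Putting f2fs on a stick is a one-liner, assuming the stick is /dev/sdX1 and
sys-fs/f2fs-tools is installed; the label is optional:)

# mkfs.f2fs -l USBSTICK /dev/sdX1
# mount -t f2fs /dev/sdX1 /mnt/usb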

-- 
Regards,
Kai

Replies to list-only preferred.




Re: [gentoo-user] Re: [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Neil Bothwick
On Thu, 1 Sep 2016 23:06:56 +0200, Kai Krakow wrote:

> If you want to use it as a backup, and it's external (which it should
> be), by all means: partition it. It acts as a protection layer against
> silly OSes that may simply wipe data at the beginning (maybe by
> accident) because there is no partition and they cannot detect that
> there is data stored on the disk, and this destroy your fs superblock -
> which is what you don't want.

That's an excellent point. I've just started using unpartitioned disks
with btrfs, but they are safely tucked away inside my computer. I can see
how having no partition table on an external disk is asking for trouble.


-- 
Neil Bothwick

Data to Picard: 'No, Captain, I do NOT run WINDOWS!'


pgplIm7C9jREq.pgp
Description: OpenPGP digital signature


Re: [gentoo-user] Re: What's happened to gentoo-sources?

2016-09-01 Thread Neil Bothwick
On Thu, 1 Sep 2016 22:08:19 +0200, Kai Krakow wrote:

> > No one forced you to do anything. Your 4.6 kernel was still in /boot,
> > your 4.6 sources were still installed. The ebuild was only removed
> > from the portage tree, nothing was uninstalled from your system unless
> > you did it. Even the ebuild was still on your computer
> > in /var/db/pkg.
> 
> Of course nobody forced me. I just can't follow how the 4.7 ebuild
> kind-of replaced the 4.6 (and others) ebuild in face of this pretty
> mature oom-killer problem.
> 
> Removal of a 4.6 series ebuild also means there would follow no updates

Are there any updates to the 4.6 series, or was 4.7 considered its
successor by the kernel devs? If the former, I can understand your
point. If the latter, there would be no updates, so there is no point.

> - so my next upgrade would "force" me into deciding going way down
> (probably a bad idea) or up into unknown territory (and this showed:
> can also be a problem). Or I can stay with 4.6 until depclean removed
> it for good (which will, by the way, remove the files from /usr/src).

Depclean won't remove it if you add it to world. Or you can add this
to /etc/portage/sets.conf to prevent depclean removing any kernels

[kernels]
class = portage.sets.dbapi.OwnerSet
world-candidate = False
files = /usr/src
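
Alternatively, to pin just the sources you already have installed, you can
record the slotted atom in world without rebuilding anything, e.g. assuming
the 4.6.4 slot:

# emerge --noreplace sys-kernel/gentoo-sources:4.6.4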

Running old or out of date kernels is not an issue with Gentoo. I had a
machine running the same kernel for at least a year, long after it was
removed from the tree, because I had some hardware for which the driver
wouldn't compile with newer kernels.


-- 
Neil Bothwick

[unwieldy legal disclaimer would go here - feel free to type your own]


pgpBTBBqUi0BV.pgp
Description: OpenPGP digital signature


[gentoo-user] Re: USB crucial file recovery

2016-09-01 Thread Kai Krakow
Am Tue, 30 Aug 2016 08:38:26 +0100
schrieb Neil Bothwick :

> On Tue, 30 Aug 2016 06:46:54 +0100, Mick wrote:
> 
> > > So I'm done with NTFS forever.  Will ext2 somehow allow me to use
> > > the USB stick across Gentoo systems without permission/ownership
> > > problems?
> > > 
> > > - Grant
> > 
> > ext2 will work, but you'll have to mount it or chmod -R 0777, or
> > only root will be able to access it.  
> 
> That's not true. Whoever owns the files and directories will be able
> to access then, even if root mounted the stick, just like a hard
> drive. If you have the same UID on all your systems, chown -R
> youruser: /mount/point will make everything available on all systems.

As long as uids match...

-- 
Regards,
Kai

Replies to list-only preferred.


pgpsHTMff1Pbl.pgp
Description: Digitale Signatur von OpenPGP


Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Neil Bothwick
On Thu, 1 Sep 2016 22:04:22 +0300, gevisz wrote:

> > LVM is neither encrypted nor compressed. The filesystems on it are no
> > different to the filesystems on physical partitions, and subject to
> > the same risks. An LVM logical volume is just a block device that is
> > treated the same as a physical partition on a non-LVM setup.  
> 
> Thank you for the explanation, I have also just refreshed my memory
> about LVM before replying to you but still can not see any reason why
> I may need LVM on an external hard drive...

You gave one in the post that I replied to, suggesting LVM in the first
place - that's why I suggested it.

You were worrying about the difficulty of altering a partition layout
once it is committed to disk and filled with data. LVM removes that
problem, because volumes and filesystems can be resized, added and
deleted at will.

However, at no point did I state that you "need" it, only that it may be
useful. The location of the drive is less relevant than its capacity when
considering this.


-- 
Neil Bothwick

A clean desk is a sign of a cluttered desk drawer.


pgp4KApoZXOVf.pgp
Description: OpenPGP digital signature


[gentoo-user] Re: USB crucial file recovery

2016-09-01 Thread Kai Krakow
Am Mon, 29 Aug 2016 17:51:19 -0700
schrieb Grant :

> > # mount -o loop,ro -t ntfs usb.img /mnt/usbstick
> > NTFS signature is missing.
> > Failed to mount '/dev/loop0': Invalid argument
> > The device '/dev/loop0' doesn't seem to have a valid NTFS.
> > Maybe the wrong device is used? Or the whole disk instead of a
> > partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
> >
> > How else can I get my file from the ddrescue image of the USB stick?
> >
> > - Grant  
> 
> 
> Ah, I got it, I just needed to specify the offset when mounting.
> Thank you so much everyone.  Many hours of work went into the file I
> just recovered.
> 
> So I'm done with NTFS forever.  Will ext2 somehow allow me to use the
> USB stick across Gentoo systems without permission/ownership problems?
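
(For reference, the offset trick mentioned above usually looks like this;
the start sector comes from "fdisk -l usb.img", 2048 being a common default
for a first partition:)

# fdisk -l usb.img
# mount -o loop,ro,offset=$((2048*512)) -t ntfs usb.img /mnt/usbstick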

Long story short: Do not put important files on USB thumb drives. They
are known to break unexpectedly and horribly. They even offer silent data
corruption as a hidden feature if stored away for a few weeks or 1-2
years without ever connecting them.

By the way: Many thumb drives are internally optimized for FAT and NTFS
usage - putting anything else on them puts more stress on the internal
flash translation layer, which is most of the time very simple (some
drives only do wear leveling where the FAT tables usually are).

So using NTFS was probably not your worst decision. Ext2 (or even worse
ext3, due to its journal) may very well destroy your thumb drive faster.

I was once able to destroy a cheap thumb drive within two weeks by
putting something other than FAT32 on it and constantly writing multiple
tens of GBs to it in small blocks. Now it has unusable blocks spread all
over its storage space. I cannot format it with anything other than FAT32
now. I don't use it any longer. It no longer reliably stores files.

Most thumb drives also need to refresh their cells internally, this is
part of a maintenance process which runs while they are connected. So,
you even cannot use them for archive storage. Thumb drives are for
temporary storage only, to transport files. But never use them as a
single copy of important data.

-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Kai Krakow
Am Thu, 1 Sep 2016 12:09:09 +0300
schrieb gevisz :

> 2016-09-01 11:54 GMT+03:00 Neil Bothwick :
> > On Thu, 1 Sep 2016 11:49:43 +0300, gevisz wrote:
> >  
>  [...]  
> >>
> >> That is exactly what I am afraid of!
> >>
> >> So, the 20-years old rule of thumb is still valid. :(
> >>  
>  [...]  
> >>
> >> And this is exactly the reason why I do not want to partition
> >> my new hard drive! :)  
> >
> > Have you considered LVM? You get the benefits of separate
> > filesystems without the limitations of inflexible partitioning.  
> 
> I am afraid of LVM because of the same reason as described below:
> 
> returning to the "old good times" of MS DOS 6.22, I do remember that
> working then on 40MB (yes, megabytes) hard drive I used some program
> that compressed all the data before saving them on that hard drive.
> Unfortunately, one day, because of the corruption, I lost all the
> data on that hard drive. Since then, I am very much afraid of
> compressed or encrypted hard drives.

You are talking of software like DoubleSpace/DriveSpace or whatever it
was called. This is a completely different scenario: it was a virtual
filesystem inside a filesystem. You layered two indirections on top of
each other. Chances were, if the top layer broke somewhere, the lower
layer became inaccessible. You were hit by this problem.

LVM works differently. It allocates huge blocks as virtual partitions,
and doesn't add an indirection for each single fs-level block in a
fine-grained structure. Of course, there's still the chance that the
descriptive block of LVM can become corrupted - but so can your
partition table. The solution is simple: dump the LVM configuration
block that holds the on-disk structure - then you can replay it. The
same as you would do with partition tables: back them up to replay them.
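
(A minimal sketch of such a backup, assuming the volume group is called vg0
and the disk is /dev/sdX; vgcfgrestore and "sfdisk /dev/sdX < file" replay
them later:)

# vgcfgbackup -f /root/vg0-lvm-metadata.txt vg0
# sfdisk -d /dev/sdX > /root/sdX-partition-table.txt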

Backup the whole fs if data is important. If you are low on storage but
retention is important, I'd recommend borgbackup.
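
(A minimal cron-able sketch, assuming the repository lives on the external
drive at /mnt/backup/borg:)

$ borg init --encryption=repokey /mnt/backup/borg
$ borg create --stats /mnt/backup/borg::home-{now} /home/youruser
$ borg prune --keep-daily 7 --keep-weekly 4 /mnt/backup/borg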


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Kai Krakow
Am Thu, 1 Sep 2016 09:03:17 +0100
schrieb Neil Bothwick :

> On Thu, 1 Sep 2016 10:18:29 +0300, gevisz wrote:
> 
> > > it will take about 5 seconds to partition it.
> > > And a few more to mkfs it.
> > 
> > Just to partition - may be, but I very much doubt
> > that it will take seconds to create a full-fledged
> > ext4 file system on these 5TB via USB2 connention.  
> 
> Even if that were the case, does it really matter? You said you
> wanted to use this drive for backups, surely doing it right is more
> important than doing it quickly. It's not like you have to hold its
> hand while mkfs is running.

If you want to use it as a backup, and it's external (which it should
be), by all means: partition it. It acts as a protection layer against
silly OSes that may simply wipe data at the beginning (maybe by
accident) because there is no partition and they cannot detect that
there is data stored on the disk, and this destroy your fs superblock -
which is what you don't want.

History shows, that in case of disaster, you may attach your disk to
some recovery environment, just to find: your backup has been destroyed
now by accident.

Or someone else attaches this drive (out of curiosity or whatever) to
some silly OS, it asks "do you want to initialize this drive?" - "ah,
yes, of course sir". Bam. Gone. Not good. With a partition table this
does not happen.

PS: Yes, I mean Windows.

-- 
Regards,
Kai

Replies to list-only preferred.


pgp_KOjiyrDRe.pgp
Description: Digitale Signatur von OpenPGP


Re: [gentoo-user] Re: What's happened to gentoo-sources?

2016-09-01 Thread Alan McKinnon

On 01/09/2016 22:08, Kai Krakow wrote:

Am Tue, 30 Aug 2016 08:47:22 +0100
schrieb Neil Bothwick :


On Tue, 30 Aug 2016 08:34:55 +0200, Kai Krakow wrote:


Surprise surprise, 4.7 has this (still not fully fixed) oom-killer
bug. When I'm running virtual machines, it still kicks in. I wanted
to stay on 4.6.x until 4.8 is released, and only then switch to
4.7. Now I was forced early (I'm using btrfs), and was instantly
punished by doing so:


No one forced you to do anything. Your 4.6 kernel was still in /boot,
your 4.6 sources were still installed. The ebuild was only removed
from the portage tree, nothing was uninstalled from your system unless
you did it. Even the ebuild was still on your computer in /var/db/pkg.


Of course nobody forced me. I just can't follow how the 4.7 ebuild
kind-of replaced the 4.6 (and others) ebuild in face of this pretty
mature oom-killer problem.

Removal of a 4.6 series ebuild also means there would follow no updates
- so my next upgrade would "force" me into deciding going way down
(probably a bad idea) or up into unknown territory (and this showed:
can also be a problem). Or I can stay with 4.6 until depclean removed
it for good (which will, by the way, remove the files from /usr/src).

I think masking would have been a much fairer option, especially because
portage has means of showing me the reasoning behind masking it.

In the end, I simply was really unprepared for this - and this is
usually not how Gentoo works and has always worked for me. I'm used to
Gentoo doing better.

Even if the 4.6 series were keyworded - in the case of kernel packages
they should not be removed without masking first. I think a lot of
people like to stay - at least temporarily - close to kernel mainline
because they want to use one or another feature.

And then my workflow is always like this: if an ebuild is removed, it's
time to also remove it from my installation and replace it with another
version or an alternative. I usually do this during the masking phase.



Was the ebuild removed from arch or ~arch?

If arch, then you have a point.
If ~arch, then you don't have a point. Gentoo has pretty much always 
expected you to deal with $WHATEVER_HAPPENS on ~arch. There has never 
been a guarantee (not even a loose one) that anything will ever stick 
around in ~arch.


Alan



[gentoo-user] Re: [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Kai Krakow
Am Thu, 1 Sep 2016 09:04:53 +0300
schrieb gevisz :

> I have bought an external 5TB Western Digital hard drive
> that I am going to use mainly for backing up some files
> in my home directory and carrying a very big files, for
> example a virtual machine image file, from one computer
> to another. This hard drive is preformatted with NTFS.
> Now, I am going to format it with ext4 which probably
> will take a lot of time taking into account that it is
> going to be done via USB connection. So, before formatting
> this hard drive I would like to know if it is still
> advisable to partition big hard drives into smaller
> logical ones.
> 
> For about 20 last years, following an advice of my older
> colleague, I always partitioned all my hard drives into
> the smaller logical ones and do very well know all
> disadvantages of doing so. :)

This has been a bad advice for at least the last 15 years when the last
DOS-based machines died.

The reasoning behind this:

Hard drives are really bad at performance after the first third of
storage space (do a benchmark, transfer speed will almost halve).
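
(Easy to verify with hdparm, assuming the drive is /dev/sdX and an hdparm
recent enough to support --offset; the second read test starts that many
GiB into the disk:)

# hdparm -t /dev/sdX
# hdparm -t --offset 4000 /dev/sdX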

Next, how do you decide up front how big a partition should be? Your OS
partition will become too small one day or another - you are going to
put big files (swap files, program files) onto the other partition. See
the previous point: that is the slow one.

By this process, you will now artificially put a big gap into OS-related
files - this clearly counteracts your original intention of
keeping OS files close together.

Most current OSes are good at keeping related files close together
(except maybe Windows after a few Windows Updates runs, but there's
software like MyDefrag to fix this and restore original performance),
or there's technology to mitigate this issue (like bcache in Linux). I
think even Windows has an optimization of allocating swap space near
the head's current position, so swap fragmentation isn't even an issue.

The advice which I was always given, and have refused for more than 15
years, is:

"But if you reinstall, you then don't have to restore all your
data, and settings, and you can even install your programs to
the other partition to not loose data and programs..."

Sorry, but no. If you expect this to work (at least on Windows, but
that's where the example is from), you will be really disappointed if
you rely on that in case of a disaster: Windows simply stored all
your settings and secret program data files on its C drive - which is
gone. The installed programs are not there or do not work, because
Windows simply has no knowledge of them on the other partition after
you reinstall, and even when you manage to start/reinstall them, their
state is kinda unknown or reset because "ProgramData" is missing. So
this setup is a complete waste of performance and time. And there's no
easy way to fix this. Tricks like symlinking C:\Users to another drive
or using a submount are unsupported, and updates will eventually fail
because of them.

I selected Windows here as the example because I expect the advice you
mentioned comes from Windows installations.

Linux, by design, works a lot better here. But still my advice is:
Never ever partition for this reasoning. Even less, if performance is
your concern.

> But what are disadvantages of not partitioning a big
> hard drive into smaller logical ones?

Usually, none. At least for ordinary usage. Performance-wise it's
always a better choice to use multiple physical disks if you need
different partitions. A valid reason for separate partitions (read:
physical drives) is special purpose software like a doctor's office
software which puts all its data and shares below a single directory
structure.

> Is it still advisable to partition a big hard drive
> into smaller logical ones and why?

No. If you want it for logical management, there are much better ways of
achieving this (like fs-integrated pooling, LVM, separate physical
drives selected for their special purpose).

Regarding performance:

I wish Linux had options to relocate files (not just defragment) back
into logical groups for nearby access. Fragmentation is less of a
problem, the bigger problem is data block dislocation over time due to
updates. In Windows, there's the wonderful tool MyDefrag which does
magic and puts your aging Windows installation back into a state of an
almost fresh installation by relocating files to sane positions.

Is there anything similar for Linux?


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Kai Krakow
Am Thu, 1 Sep 2016 22:56:06 +0300
schrieb gevisz :

> > Backups are annoying.  
> 
> Yes. :)

No, try borgbackup with a cronjob.

> >  I don't do them as well as ideally I should  
> 
> Who does? :)

I do.

> Well, probably, one who just lost a lot of data because of not doing
> backup. :)

Data that you do not back up is unimportant data - by definition. It's
that simple. ;-)

You also care about your money and put it in a bank account, keeping
your PIN secret, and let neither money nor PIN lie around for everyone
to grab.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: What's happened to gentoo-sources?

2016-09-01 Thread Kai Krakow
Am Sun, 21 Aug 2016 05:55:06 -0400
schrieb Rich Freeman :

> On Sun, Aug 21, 2016 at 5:12 AM, Peter Humphrey
>  wrote:
> >
> > After this morning's sync, both versions 4.4.6 and 4.6.4 of
> > gentoo-sources have disappeared. Is this just finger trouble in the
> > server chain? I get the same with UK and US sync servers.
> >  
> 
> No idea, but upstream is up to 4.4.19, and 4.6.7 (which is now EOL).
> So, those are pretty old versions.  I see 4.4.19 in the Gentoo repo,
> and 4.7.2 (which is probably where 4.6 users should be moving to).
 ^...
No, not until the oom-killer bug has been completely resolved. It still
kicks in for me, killing my Chromium tabs while using VirtualBox. And
I have 16GB of RAM, and the killer kicks in while the kernel still
reports 50% free.

-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: What's happened to gentoo-sources?

2016-09-01 Thread Kai Krakow
Am Tue, 30 Aug 2016 08:47:22 +0100
schrieb Neil Bothwick :

> On Tue, 30 Aug 2016 08:34:55 +0200, Kai Krakow wrote:
> 
> > Surprise surprise, 4.7 has this (still not fully fixed) oom-killer
> > bug. When I'm running virtual machines, it still kicks in. I wanted
> > to stay on 4.6.x until 4.8 is released, and only then switch to
> > 4.7. Now I was forced early (I'm using btrfs), and was instantly
> > punished by doing so:  
> 
> No one forced you to do anything. Your 4.6 kernel was still in /boot,
> your 4.6 sources were still installed. The ebuild was only removed
> from the portage tree, nothing was uninstalled from your system unless
> you did it. Even the ebuild was still on your computer in /var/db/pkg.

Of course nobody forced me. I just can't follow how the 4.7 ebuild
kind of replaced the 4.6 (and other) ebuilds in the face of this by now
well-known oom-killer problem.

Removal of the 4.6 series ebuilds also means no further updates would
follow - so my next upgrade would "force" me to decide between going way
down (probably a bad idea) or up into unknown territory (and as this
showed, that can also be a problem). Or I could stay with 4.6 until
depclean removes it for good (which will, by the way, remove the files
from /usr/src).

I think masking would have been a much fairer option, especially
because portage has means of showing me the reasoning behind a mask.
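
For illustration only, such a mask entry (version and wording are
hypothetical) could have looked like this in profiles/package.mask:

# Masked while the oom-killer regression in 4.7 is unresolved
=sys-kernel/gentoo-sources-4.7.2

and portage would then show that comment whenever the mask blocks an
installation or upgrade.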

In the end, I was simply unprepared for this - and that is not how
Gentoo usually works, or has ever worked for me. I'm used to Gentoo
doing better.

Even if the 4.6 series were only keyworded - in the case of kernel
packages they should not be removed without masking first. I think a
lot of people like to stay - at least temporarily - close to kernel
mainline because they want to use one feature or another.

And then my workflow is always like this: If an ebuild is removed, it's
time to also remove it from my installation and replace it with another
version or an alternative. I usually do this during the masking phase.

-- 
Regards,
Kai

Replies to list-only preferred.




Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread gevisz
2016-09-01 22:12 GMT+03:00 Rich Freeman :
> On Thu, Sep 1, 2016 at 2:58 PM, gevisz  wrote:
>> 2016-09-01 14:55 GMT+03:00 Rich Freeman :
>>
>>> 2. Set it up as an LVM partition.  Unless you're using filesystems
>>> like zfs/btrfs that have their own way of doing volume management,
>>> this just makes things less painful down the road.
>>>
>>> 3. I'd probably just set it up as one big logical volume, unless you
>>> know you don't need all the space and you think you might use it for
>>> something else later.  You can change your mind on this with ext4+lvm
>>> either way, but better to start out whichever way seems best.
>>
>> I had to refresh my memory about LVM before replying to you
>> but still can not see why I may need LVM on an external
>> hard drive...
>
> It just gives you more options in the future,

Yes, thank you.

> it is easy to move LVM volumes to other drives, re-partition them later,
> and so on.

I still suspect that this extra level of complexity can complicate recovery
of the data, if anything happens to the disk under LVM management (except
for stealing the hard drive, of course :).

>  I agree it is probably overkill on a removable device, but it doesn't hurt.
> This is a 5TB drive after all.  But, I don't think it is super-critical 
> either.
>
>>
>>> It will take you all of 30 seconds to format this, unless you're
>>> running badblocks (which almost nobody does, because...
>>
>> it takes too much time?
>>
>> I am currently running a smart test on it, and it promised to take
>> 10 hours to complete...
>
> That's basically it.  If it didn't take time people would of course
> run it first.  I think a SMART test would be about as good and likely
> a lot faster.  However, the drive should be managing bad blocks on its
> own (granted, many drives seem to get that wrong in my experience,
> which is part of why I run btrfs, but I probably wouldn't use
> btrfs/zfs for a drive you're moving all over the place since who knows
> what kind of kernel you'll have when you use it and heaven help you if
> you ever need to read it on Windows).

It is not a question of using the disk with Windows, but I see reports
about problems with btrfs on this list too often to try using it
myself...

>>> You seem to be concerned about losing data.  You should be.  This is a
>>> physical storage device.  You WILL lose everything stored on it at
>>> some point in time.
>>
>> Last time, I have managed to restore all the data from my 2.5" hard
>> drive that suddenly died about 7 years ago and hope to do it again
>> if any. :)
>
> Well, if the data is redundant then you're fine (it is essentially
> already backed up).

No, that data was not backed up.

But I am not guaranteed to be so lucky again, of course. :(

That is why I decided to finally start to back up my data. :)

>  But, you should check those backups from time to time.
>
> You should never rely on the ability to recover data from a hard
> drive.  For starters, if you just lose the thing (portable things can
> sometimes grow legs; you're talking about 5 libraries of congress in a
> bag that could get stolen) or it is catastrophically destroyed that
> isn't going to work.  Short of that there is a fair chance you can get
> a lot of data off the drive, and it is fairly likely if you're using
> some kind of expensive recovery service, but you can't promise that
> the specific file you care about most will get recovered.
>
> Backups are annoying.

Yes. :)

>  I don't do them as well as ideally I should

Who does? :)

Well, probably, one who just lost a lot of data because of not doing backup. :)

> (way too much data to get it all offsite), but I make a conscious
> decision about what does/doesn't get backed up and how.  I
> occasionally restore my encrypted cloud backups to confirm they
> contain what I expect them to.  I actually get the log summary emailed
> daily to make sure it is running (if I had more hosts I could use some
> kind of monitoring for that...).  I've never needed to use the online
> cloud backups, but they're there for a reason and they cover anything
> I actually care about (documents and such).  I also backup all my
> cloud services (evernote, google drive, etc) to local storage
> occasionally; that doesn't require further backup since it is the
> backup.  You just need two copies of everything, with one copy
> preferably being inaccessible from the other and not at the same
> physical site.

Well, thank you for your advice.



Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread gevisz
2016-09-01 15:51 GMT+03:00 Michael Mol :
>
> On Thursday, September 01, 2016 12:09:09 PM gevisz wrote:
>> 2016-09-01 11:54 GMT+03:00 Neil Bothwick :
>> > On Thu, 1 Sep 2016 11:49:43 +0300, gevisz wrote:
>> >> > If your filesystem becomes corrupt (and you are unable to
>> >> > repair it), *all* of your data is lost (instead of just
>> >> > one partition). That's the only disadvantage I can think
>> >> > of.
>> >>
>> >> That is exactly what I am afraid of!
>> >>
>> >> So, the 20-years old rule of thumb is still valid. :(
>> >>
>> >> > I don't like partitions either (after some years, I
>> >> > always found that sizes don't match my requirements any
>> >> > more),
>> >>
>> >> And this is exactly the reason why I do not want to partition
>> >> my new hard drive! :)
>> >
>> > Have you considered LVM? You get the benefits of separate filesystems
>> > without the limitations of inflexible partitioning.
>>
>> I am afraid of LVM because of the same reason as described below:
>>
>> returning to the "old good times" of MS DOS 6.22, I do remember that working
>> then on 40MB (yes, megabytes) hard drive I used some program that
>> compressed all the data before saving them on that hard drive.
>> Unfortunately, one day, because of the corruption, I lost all the data on
>> that hard drive. Since then, I am very much afraid of compressed or
>> encrypted hard drives.
>
> LVM doesn't *need* to do any of that. It will only do as much as you tell it
> to do. If you only want to use it as a way of reshaping relatively simple
> partitions, you can use it for that.
>
> Honestly, I tend not to create separate partitions for separate mount points
> these days. At least, not on personal systems. For servers, it can be
> beneficial to have /var separate from /, or /var/log separate from /var, or
> /var/spool, or /var/lib/mysql, or what have you. But the biggest driver for
> that, IME, is if one of those fills up, it can't take down the rest of the
> host.
>
> In your case, I'd suggest using a single / filesystem. If it works, it works.
> If it doesn't, you'll know in the future where you need to be more flexible;
> there's no single panacea.

Thank you for the reply. And I even agree with you to the point that
on a Linux desktop it may be enough to have just 3 different partitions:
one for /, a second for swap (yes, one can do without it nowadays),
and a third for /home. But you probably missed the point that this is
about an external drive dedicated to backups only.



Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Rich Freeman
On Thu, Sep 1, 2016 at 2:58 PM, gevisz  wrote:
> 2016-09-01 14:55 GMT+03:00 Rich Freeman :
>
>> 2. Set it up as an LVM partition.  Unless you're using filesystems
>> like zfs/btrfs that have their own way of doing volume management,
>> this just makes things less painful down the road.
>>
>> 3. I'd probably just set it up as one big logical volume, unless you
>> know you don't need all the space and you think you might use it for
>> something else later.  You can change your mind on this with ext4+lvm
>> either way, but better to start out whichever way seems best.
>
> I had to refresh my memory about LVM before replying to you
> but still can not see why I may need LVM on an external
> hard drive...

It just gives you more options in the future, it is easy to move LVM
volumes to other drives, re-partition them later, and so on.  I agree
it is probably overkill on a removable device, but it doesn't hurt.
This is a 5TB drive after all.  But, I don't think it is
super-critical either.
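
For what it's worth, moving the data to a new drive later is only a few
commands (device and volume group names below are placeholders):

# vgextend backupvg /dev/sdY1    # add the new disk as a second PV
# pvmove /dev/sdX1               # migrate all extents off the old PV
# vgreduce backupvg /dev/sdX1    # drop the old PV from the VG

The filesystem stays mounted the whole time.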

>
>> It will take you all of 30 seconds to format this, unless you're
>> running badblocks (which almost nobody does, because...
>
> it takes too much time?
>
> I am currently running a smart test on it, and it promised to take
> 10 hours to complete...

That's basically it.  If it didn't take time people would of course
run it first.  I think a SMART test would be about as good and likely
a lot faster.  However, the drive should be managing bad blocks on its
own (granted, many drives seem to get that wrong in my experience,
which is part of why I run btrfs, but I probably wouldn't use
btrfs/zfs for a drive you're moving all over the place since who knows
what kind of kernel you'll have when you use it and heaven help you if
you ever need to read it on Windows).

>
>> You seem to be concerned about losing data.  You should be.  This is a
>> physical storage device.  You WILL lose everything stored on it at
>> some point in time.
>
> Last time, I have managed to restore all the data from my 2.5" hard
> drive that suddenly died about 7 years ago and hope to do it again
> if any. :)

Well, if the data is redundant then you're fine (it is essentially
already backed up).  But, you should check those backups from time to
time.

You should never rely on the ability to recover data from a hard
drive.  For starters, if you just lose the thing (portable things can
sometimes grow legs; you're talking about 5 libraries of congress in a
bag that could get stolen) or it is catastrophically destroyed that
isn't going to work.  Short of that there is a fair chance you can get
a lot of data off the drive, and it is fairly likely if you're using
some kind of expensive recovery service, but you can't promise that
the specific file you care about most will get recovered.

Backups are annoying.  I don't do them as well as ideally I should
(way too much data to get it all offsite), but I make a conscious
decision about what does/doesn't get backed up and how.  I
occasionally restore my encrypted cloud backups to confirm they
contain what I expect them to.  I actually get the log summary emailed
daily to make sure it is running (if I had more hosts I could use some
kind of monitoring for that...).  I've never needed to use the online
cloud backups, but they're there for a reason and they cover anything
I actually care about (documents and such).  I also backup all my
cloud services (evernote, google drive, etc) to local storage
occasionally; that doesn't require further backup since it is the
backup.  You just need two copies of everything, with one copy
preferably being inaccessible from the other and not at the same
physical site.

-- 
Rich



Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread gevisz
2016-09-01 15:21 GMT+03:00 Neil Bothwick :
> On Thu, 1 Sep 2016 12:09:09 +0300, gevisz wrote:
>
>> > Have you considered LVM? You get the benefits of separate filesystems
>> > without the limitations of inflexible partitioning.
>>
>> I am afraid of LVM because of the same reason as described below:
>>
>> returning to the "old good times" of MS DOS 6.22, I do remember that
>> working then on 40MB (yes, megabytes) hard drive I used some program
>> that compressed all the data before saving them on that hard drive.
>> Unfortunately, one day, because of the corruption, I lost all the data
>> on that hard drive. Since then, I am very much afraid of compressed or
>> encrypted hard drives.
>
> LVM is neither encrypted nor compressed. The filesystems on it are no
> different to the filesystems on physical partitions, and subject to the
> same risks. An LVM logical volume is just a block device that is treated
> the same as a physical partition on a non-LVM setup.

Thank you for the explanation, I have also just refreshed my memory
about LVM before replying to you but still can not see any reason why
I may need LVM on an external hard drive...

> So far, you have come up with reasons, good or otherwise, for not taking
> each of the available choices. You need to decide what you really need
> and what is important to you. Only then can you decide on the best
> arrangement for your needs.
>
> Neil Bothwick
>
> Evolution stops when stupidity is no longer fatal!

:)



Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread gevisz
2016-09-01 14:55 GMT+03:00 Rich Freeman :
> On Thu, Sep 1, 2016 at 2:04 AM, gevisz  wrote:
>>
>> Is it still advisable to partition a big hard drive
>> into smaller logical ones and why?
>>
>
> Assuming this is only used on Linux machines (you mentioned moving
> files around), here is what I would do:
>
> 1. Definitely create a partition table.  Yes, I know some like to
> stick filesystems on raw drives, but you're basically going to fight
> all the automation in existence if you do this.

I will do it with gparted and guess that it will create a partition table
for me anyway.

> 2. Set it up as an LVM partition.  Unless you're using filesystems
> like zfs/btrfs that have their own way of doing volume management,
> this just makes things less painful down the road.
>
> 3. I'd probably just set it up as one big logical volume, unless you
> know you don't need all the space and you think you might use it for
> something else later.  You can change your mind on this with ext4+lvm
> either way, but better to start out whichever way seems best.

I had to refresh my memory about LVM before replying to you
but still can not see why I may need LVM on an external
hard drive...

> It will take you all of 30 seconds to format this, unless you're
> running badblocks (which almost nobody does, because...

it takes too much time?

I am currently running a smart test on it, and it promised to take
10 hours to complete...

> You seem to be concerned about losing data.  You should be.  This is a
> physical storage device.  You WILL lose everything stored on it at
> some point in time.

Last time, I managed to restore all the data from my 2.5" hard
drive that suddenly died about 7 years ago and hope to be able to do it
again if it ever becomes necessary. :)

>  You mitigate this by one or more of:
> 1.  Not storing anything you mind losing on the drive, and then not
> complaining when you lose it.
> 2.  Keeping backups, preferably at a different physical location,
> using a periodically tested recovery methodology.
> 3.  Availability solutions like RAID (not the same as a backup, but it
> will mean less downtime WHEN you WILL have a drive failure).  Some
> filesystems like zfs/btrfs have specific ways of achieving this (and
> are generally more resistant to unreliable storage devices, which all
> storage devices are).
>
> I've actually had LVM eat my data once due to some kind of really rare
> bug (found one discussion of similar issues on some forum somewhere).

Aha!

> That isn't a good reason not to use LVM.  Wanting to plug the drive
> into a bunch of Windows machines would be a good reason not to use
> LVM, or ext4 for that matter.
>
> Most of the historic reasons for not having large volumes had to do
> with addressing limits, whether it be drive geometry limits,
> filesystem limits, etc.  Modern partition tables like GPT and
> filesystems can handle volumes MUCH larger than 5TB.
>
> Most modern journaling filesystems should also tend to avoid failure
> modes like losing the entire filesystem during a power failure (when
> correctly used, heaven help you if you follow a random friend's advice
> with mount options, like not using at least ordered data or disabling
> barriers).  But, bugs can exist, which is a big reason to have backups
> and not just trust your filesystem unless you don't care much about
> the data.

Thank you for replying.



Re: [gentoo-user] Re: USB crucial file recovery

2016-09-01 Thread Rich Freeman
On Thu, Sep 1, 2016 at 2:09 PM, Volker Armin Hemmann
 wrote:
>
> a common misconception. But not true at all. Google a bit.

Feel free to enlighten us.  My understanding is that data=journal
means that all data gets written first to the journal.  Completed
writes will make it to the main filesystem after a crash, and
incomplete writes will of course be rolled back, which is what you
want.
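
For reference, requesting that behaviour is just a mount option; a
sketch of an fstab line (device and mountpoint are placeholders, and
write barriers stay on by default):

/dev/sdc1   /mnt/backup   ext4   defaults,data=journal   0 2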

But simply disagreeing and saying to search Google is fairly useless,
since you can find all kinds of junk on Google.  You can't even
guarantee that the same search terms will lead to the same results for
two different people.

And FWIW, this is a topic that Linus and the ext3 authors have
disagreed with at points (not this specific question, but rather what
the most appropriate defaults are).  So, it isn't like there isn't
room for disagreement on best practice, or that any two people with
knowledge of the issues are guaranteed to agree.

>>
>> Now, I can still think of ways you can lose data in data=journal mode:
>>
>> * You mounted the filesystem with barrier=0 or with nobarrier; this can 
>> result
>
> not needed.

Well, duh.  He is telling people NOT to do this, because this is how
you can LOSE data.


>>
>> * Your application didn't flush its writes to disk when it should have.
>
> not needed either.

That very much depends on the application.  If you need to ensure that
transactions are in-sync with remote hosts (such as in a database) it
is absolutely critical to flush writes.

Applications shouldn't just flush on every write or close, because
that causes needless disk thrashing.  Yes, data will be lost if users
have write caching enabled, and users who would prefer a slow system
over one that loses more data when the power goes out should disable
caching or buy a UPS.

>
> nope.

Care to actually offer anything constructive?  His advice was
reasonably well-founded, even if I personally wouldn't do everything
exactly as he prefers to do so.

-- 
Rich



Re: [gentoo-user] Re: USB crucial file recovery

2016-09-01 Thread Volker Armin Hemmann
Am 31.08.2016 um 16:33 schrieb Michael Mol:
>
> In data=journal mode, the contents of files pass through the journal as well, 
> ensuring that, at least as far as the filesystem's responsibility is 
> concerned, 
> the data will be intact in the event of a crash.

a common misconception. But not true at all. Google a bit.
>
> Now, I can still think of ways you can lose data in data=journal mode:
>
> * You mounted the filesystem with barrier=0 or with nobarrier; this can 
> result 

not needed.

> in data writes going to disk out of order, if the I/O stack supports
> barriers. If you say "my file is ninety bytes", "here are ninety bytes of
> data, all 9s", "my file is now thirty bytes", "here are thirty bytes of
> data, all 3s", then in the end you should have a thirty-byte file filled
> with 3s. If you have barriers enabled and you crash halfway through the
> whole process, you should find a file of ninety bytes, all 9s. But if you
> have barriers disabled, the data may hit disk as though you'd said "my
> file is ninety bytes, here are ninety bytes of data, all 9s, here are
> thirty bytes of data, all 3s, now my file is thirty bytes." If that
> happens, and you crash partway through the commit to disk, you may see a
> ninety-byte file consisting of thirty 3s and sixty 9s. Or things may land
> such that you see a thirty-byte file of 9s.
>
> * Your application didn't flush its writes to disk when it should have.

not needed either.

>
> * Your vm.dirty_bytes or vm.dirty_ratio are too high, you've been writing a 
> lot to disk, and the kernel still has a lot of data buffered waiting to be 
> written. (Well, that can always lead to data loss regardless of how high 
> those 
> settings are, which is why applications should flush their writes.)
>
> * You've used hdparm to enable write buffers in your hard disks, and your 
> hard 
> disks lose power while their buffers have data waiting to be written.
>
> * You're using a buggy disk device that does a poor job of handling power 
> loss. Such as some SSDs which don't have large enough capacitors for their 
> own 
> write reordering. Or just about any flash drive.
>
> * There's a bug in some code, somewhere.

nope.
> In-memory corruption of data is a universal hazard. ECC should be the norm, 
> not the exception, honestly.
>




Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Rich Freeman
On Thu, Sep 1, 2016 at 1:42 PM, Neil Bothwick  wrote:
> On Thu, 1 Sep 2016 09:27:39 -0400, Rich Freeman wrote:
>
>> > Honestly, I tend not to create separate partitions for separate mount
>> > points these days. At least, not on personal systems. For servers,
>> > it can be beneficial to have /var separate from /, or /var/log
>> > separate from /var, or /var/spool, or /var/lib/mysql, or what have
>> > you. But the biggest driver for that, IME, is if one of those fills
>> > up, it can't take down the rest of the host.
>
>> The other big use case these days would be SSDs.  I tend to have one
>> SSD filesystem for root, and one SSD filesystem for everything else.
>> That means a lot of bind mounts, but it all works.
>
> Bind mounts? I thought you would use btrfs subvolumes!
>

Often the bind mounts point to btrfs subvolumes.

Yeah, I guess I could directly mount all those subvolumes, but I find
symlinks or bind mounts easier.  The other factor is that if I have
unnecessary subvolumes then I'm having to manage snapshots across more
of them and my snapshots are less atomic, since snapshots don't cross
subvolume boundaries (which is something which ought to be
configurable).
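
For anyone curious, such a bind mount is a one-liner in fstab (the
paths here are made up):

/bulk/distfiles   /usr/portage/distfiles   none   bind   0 0

so the directory lives on the big filesystem but shows up where the
tools expect it.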


-- 
Rich



Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Neil Bothwick
On Thu, 1 Sep 2016 09:27:39 -0400, Rich Freeman wrote:

> > Honestly, I tend not to create separate partitions for separate mount
> > points these days. At least, not on personal systems. For servers,
> > it can be beneficial to have /var separate from /, or /var/log
> > separate from /var, or /var/spool, or /var/lib/mysql, or what have
> > you. But the biggest driver for that, IME, is if one of those fills
> > up, it can't take down the rest of the host.

> The other big use case these days would be SSDs.  I tend to have one
> SSD filesystem for root, and one SSD filesystem for everything else.
> That means a lot of bind mounts, but it all works.

Bind mounts? I thought you would use btrfs subvolumes!


-- 
Neil Bothwick

Make it idiot proof and someone will make a better idiot.




Re: [gentoo-user] Re: USB crucial file recovery

2016-09-01 Thread Rich Freeman
On Thu, Sep 1, 2016 at 10:55 AM, Michael Mol  wrote:
>
> The sad truth is that many (most?) users don't understand the idea of
> unmounting. Even Microsoft largely gave up, having flash drives "optimized for
> data safety" as opposed to "optimized for speed". While it'd be nice if the
> average John Doe would follow instructions, anyone who's worked in IT
> understands that the average John Doe...doesn't. And above-average ones assume
> they know better and don't have to.
>

If these users are the target of your OS then you should probably tune
the settings accordingly.

This mailing list notwithstanding (sometimes), I don't think this is
really Gentoo's core audience.

-- 
Rich



Re: [gentoo-user] Re: USB crucial file recovery

2016-09-01 Thread Michael Mol

On Thursday, September 01, 2016 04:21:18 PM J. Roeleveld wrote:
> On Thursday, September 01, 2016 08:41:39 AM Michael Mol wrote:
> > On Wednesday, August 31, 2016 11:45:15 PM Alan McKinnon wrote:
> > > On 31/08/2016 17:25, Grant wrote:
> > > >> Which NTFS system are you using?
> > > >> 
> > > >> ntfs kernel module? It's quite dodgy and unsafe with writes
> > > >> ntfs-ng on fuse? I find that one quite solid
> > > > 
> > > > I'm using ntfs-ng as opposed to the kernel option(s).
> > > 
> > > I'm offering 10 to 1 odds that your problems came from ... one that you
> > > yanked too soon
> > 
> > (pardon the in-line snip, while I get on my soap box)
> > 
> > The likelihood of this happening can be greatly reduced by setting
> > vm.dirty_bytes to something like 2097125 and vm.dirty_background_bytes to
> > something like 1048576. This prevents the kernel from queuing up as much
> > data for sending to disk. The application doing the copy or write will
> > normally report "complete" long before writes to slow media are
> > actually...complete. Setting vm.dirty_bytes to something low prevents the
> > kernel's backlog of data from getting so long.
> > 
> > vm.dirty_bytes has another, closely-related setting, vm.dirty_ratio.
> > vm.dirty_ratio is a percentage of RAM that is used for dirty bytes.
> > If vm.dirty_ratio is set, vm.dirty_bytes will read 0. If
> > vm.dirty_bytes is set, vm.dirty_ratio will read 0.
> > 
> > The default is for vm.dirty_ratio to be 20, which means up to 20% of
> > your memory can find itself used as a write buffer for data on its way to
> > a
> > filesystem. On a system with only 2GiB of RAM, that's 409MiB of data that
> > the kernel may still be waiting to push through the filesystem layer! If
> > you're writing to, say, a class 10 SDHC card, the data may not be at rest
> > for another 40s after the application reports the copy operation is
> > complete!
> > 
> > If you've got a system with 8GiB of memory, multiply all that by four.
> > 
> > The defaults for vm.dirty_bytes and vm.dirty_background_bytes are, IMO,
> > badly broken and an insidious source of problems for both regular Linux
> > users and system administrators.
> 
> I would prefer to be able to have different settings per disk.
> Swappable drives like USB, I would put small numbers.
> But for built-in drives, I'd prefer to keep default values or tuned to the
> actual drive.

The problem is that's not really possible. vm.dirty_bytes and 
vm.dirty_background_bytes deal with the page cache, which sits at the VFS 
layer, not the block device layer. It could certainly make sense to apply it 
on a per-mount basis, though.

-- 
:wq



Re: [gentoo-user] Re: USB crucial file recovery

2016-09-01 Thread Michael Mol
On Thursday, September 01, 2016 09:35:15 AM Rich Freeman wrote:
> On Thu, Sep 1, 2016 at 8:41 AM, Michael Mol  wrote:
> > The defaults for vm.dirty_bytes and vm.dirty_background_bytes are, IMO,
> > badly broken and an insidious source of problems for both regular Linux
> > users and system administrators.
> 
> It depends on whether you tend to yank out drives without unmounting
> them,

The sad truth is that many (most?) users don't understand the idea of 
unmounting. Even Microsoft largely gave up, having flash drives "optimized for 
data safety" as opposed to "optimized for speed". While it'd be nice if the 
average John Doe would follow instructions, anyone who's worked in IT 
understands that the average John Doe...doesn't. And above-average ones assume 
they know better and don't have to.

As such, queuing up that much data while reporting to the user that the copy 
is already complete violates the principle of least surprise.

> or if you have a poorly-implemented database that doesn't know
> about fsync and tries to implement transactions across multiple hosts.

I don't know off the top of my head what database implementation would do that, 
though I could think of a dozen that could be vulnerable if they didn't sync 
properly.

The real culprit that comes to mind, for me, are copy tools. Whether it's dd, 
mv, cp, or a copy dialog in GNOME or KDE. I would love to see CoDel-style 
time-based buffer sizes applied throughout the stack. The user may not care 
about how many milliseconds it takes for a read to turn into a completed write 
on the face of it, but they do like accurate time estimates and low latency 
UI.

> 
> The flip side of all of this is that you can save-save-save in your
> applications and not sit there and watch your application wait for the
> USB drive to catch up.  It also allows writes to be combined more
> efficiently (less of an issue for flash, but you probably can still
> avoid multiple rounds of overwriting data in place if multiple
> revisions come in succession, and metadata updating can be
> consolidated).

I recently got bit by vim's easytags causing saves to take a couple dozen 
seconds, leading me not to save as often as I used to. And then a bunch of 
code I wrote Monday...wasn't there any more. I was sad.

> 
> For a desktop-oriented workflow I'd think that having nice big write
> buffers would greatly improve the user experience, as long as you hit
> that unmount button or pay attention to that flashing green light
> every time you yank a drive.

Realistically, users aren't going to pay attention. You and I do, but that's 
because we understand the *why* behind the importance.

I love me fat write buffers for write combining, page caches etc. But, IMO, it 
shouldn't take longer than 1-2s (barring spinning rust disk wake) for full 
buffers to flush to disk; at modern write speeds (even for a slow spinning 
disc), that's going to be a dozen or so megabytes of data, which is plenty big 
for write-combining purposes.

-- 
:wq



Re: [gentoo-user] USB crucial file recovery

2016-09-01 Thread Stroller

> On 31 Aug 2016, at 16:25, Grant  wrote:
> 
>> Yes, FAT. It works and works well.
>> Or exFAT which is Microsoft's solution to the problem of very large
>> files on FAT.
> 
> FAT32 won't work for me since I need to use files larger than 4GB.  I
> know it's beta software but should exfat be more reliable than ntfs?

There's always `split`.

Very easy to use, just a little inconvenient to have to invoke it every time 
you copy a file to USB.
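
A minimal example (file names are placeholders; 3G keeps each piece
safely under FAT32's 4GB limit):

$ split -b 3G big-vm-image.img big-vm-image.img.part_

Copy the parts over, then on the other machine:

$ cat big-vm-image.img.part_* > big-vm-image.img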

Stroller.




Re: [gentoo-user] Re: USB crucial file recovery

2016-09-01 Thread J. Roeleveld
On Thursday, September 01, 2016 08:41:39 AM Michael Mol wrote:
> On Wednesday, August 31, 2016 11:45:15 PM Alan McKinnon wrote:
> > On 31/08/2016 17:25, Grant wrote:
> > >> Which NTFS system are you using?
> > >> 
> > >> ntfs kernel module? It's quite dodgy and unsafe with writes
> > >> ntfs-ng on fuse? I find that one quite solid
> > > 
> > > I'm using ntfs-ng as opposed to the kernel option(s).
> > 
> > I'm offering 10 to 1 odds that your problems came from ... one that you
> > yanked too soon
> 
> (pardon the in-line snip, while I get on my soap box)
> 
> The likelihood of this happening can be greatly reduced by setting
> vm.dirty_bytes to something like 2097125 and vm.dirty_background_bytes to
> something like 1048576. This prevents the kernel from queuing up as much
> data for sending to disk. The application doing the copy or write will
> normally report "complete" long before writes to slow media are
> actually...complete. Setting vm.dirty_bytes to something low prevents the
> kernel's backlog of data from getting so long.
> 
> vm.dirty_bytes has another, closely-related setting, vm.dirty_ratio.
> vm.dirty_ratio is a percentage of RAM that is used for dirty bytes. If
> vm.dirty_ratio is set, vm.dirty_bytes will read 0. If vm.dirty_bytes
> is set, vm.dirty_ratio will read 0.
> 
> The default is for vm.dirty_ratio to be 20, which means up to 20% of
> your memory can find itself used as a write buffer for data on its way to a
> filesystem. On a system with only 2GiB of RAM, that's 409MiB of data that
> the kernel may still be waiting to push through the filesystem layer! If
> you're writing to, say, a class 10 SDHC card, the data may not be at rest
> for another 40s after the application reports the copy operation is
> complete!
> 
> If you've got a system with 8GiB of memory, multiply all that by four.
> 
> The defaults for vm.dirty_bytes and vm.dirty_background_bytes are, IMO,
> badly broken and an insidious source of problems for both regular Linux
> users and system administrators.

I would prefer to be able to have different settings per disk.
For swappable drives like USB, I would use small numbers.
But for built-in drives, I'd prefer to keep the default values, or values
tuned to the actual drive.

--
Joost



Re: [gentoo-user] Re: USB crucial file recovery

2016-09-01 Thread Rich Freeman
On Thu, Sep 1, 2016 at 8:41 AM, Michael Mol  wrote:
>
> The defaults for vm.dirty_bytes and vm.dirty_background_bytes are, IMO, badly
> broken and an insidious source of problems for both regular Linux users and
> system administrators.
>

It depends on whether you tend to yank out drives without unmounting
them, or if you have a poorly-implemented database that doesn't know
about fsync and tries to implement transactions across multiple hosts.

The flip side of all of this is that you can save-save-save in your
applications and not sit there and watch your application wait for the
USB drive to catch up.  It also allows writes to be combined more
efficiently (less of an issue for flash, but you probably can still
avoid multiple rounds of overwriting data in place if multiple
revisions come in succession, and metadata updating can be
consolidated).

For a desktop-oriented workflow I'd think that having nice big write
buffers would greatly improve the user experience, as long as you hit
that unmount button or pay attention to that flashing green light
every time you yank a drive.

-- 
Rich



Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Rich Freeman
On Thu, Sep 1, 2016 at 8:51 AM, Michael Mol  wrote:
>
> Honestly, I tend not to create separate partitions for separate mount points
> these days. At least, not on personal systems. For servers, it can be
> beneficial to have /var separate from /, or /var/log separate from /var, or
> /var/spool, or /var/lib/mysql, or what have you. But the biggest driver for
> that, IME, is if one of those fills up, it can't take down the rest of the
> host.
>

The other big use case these days would be SSDs.  I tend to have one
SSD filesystem for root, and one SSD filesystem for everything else.
That means a lot of bind mounts, but it all works.  I'm not about to
get into separate filesystems for random directories in var that tend
to get big.

-- 
Rich



Re: [gentoo-user] emerge @system

2016-09-01 Thread Peter Humphrey
On Tuesday 30 August 2016 22:51:47 Mick wrote:
> On Tuesday 30 Aug 2016 15:30:51 Peter Humphrey wrote:
> > On Tuesday 30 Aug 2016 13:38:13 J. Roeleveld wrote:
> > > On Tuesday, August 30, 2016 11:56:50 AM Peter Humphrey wrote:
> > > > On Tuesday 30 Aug 2016 12:06:43 Alan McKinnon wrote:
> > > > > You should elaborate more and be specific on what you mean by "The
> > > > > reason is an intermittent series of apparently unrelated things
> > > > > going
> > > > > wrong."
> > > > 
> > > > Here's one then: In KMail (yes, I know*) the folder list contains an
> > > > item
> > > > "trash" (ugh!), but when I come to empty it it's called "Wastebin"
> > > > (much
> > > > better) in the drop-down menu.
> > > 
> > > > Check your language/internationalisation settings.
> > > > The translations are from "kde-apps/kdepim-l10n"
> > 
> > I can't find anything wrong with those settings. Everything is set to
> > British English.
> > 
> > $ locale -a
> > C
> > en_GB
> > en_GB.iso88591
> > en_GB.iso885915
> > en_GB.utf8
> > POSIX
> > 
> > $ eselect locale list
> > 
> > Available targets for the LANG variable:
> >   [1]   C
> >   [2]   en_GB
> >   [3]   en_GB.iso88591
> >   [4]   en_GB.iso885915
> >   [5]   en_GB.utf8 *
> >   [6]   POSIX
> >   [ ]   (free form)
> 
> This is all OK.
> 
> > > > And another: if I move my user account away and create a new one,
> > > > setting KDE plasma up from scratch (this is ~amd64), the
> > > > system-settings
> > > > panel has no icons and the single-click-to-open preference is
> > > > ignored,
> > > > even though it's the default and I already have it set anyway. Then,
> > > > if
> > > > I revert to the original home directory, which has followed events
> > > > through the last six months, those faults disappear.
> > > 
> > > Hmm... not sure where this comes from
> > 
> > But this time, the single-click-to-open preference is still ignored.
> > 
> > You see, the system is behaving inconsistently - even irrationally at
> > times.
> The latest Konqueror (stable) update to 4.14.20 fixed the one click
> problem, as well as opening files within Konqueror itself, rather than
> opening a separate dolphin window.  Dolphin is still borked and behaves
> /irrationally/.  I hope the next update will fix that too.

Thanks to all who've helped. I've now just completed a rebuild from bare 
metal and everything seems fine at the moment.

-- 
Rgds
Peter




Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Michael Mol

On Thursday, September 01, 2016 12:09:09 PM gevisz wrote:
> 2016-09-01 11:54 GMT+03:00 Neil Bothwick :
> > On Thu, 1 Sep 2016 11:49:43 +0300, gevisz wrote:
> >> > If your filesystem becomes corrupt (and you are unable to
> >> > repair it), *all* of your data is lost (instead of just
> >> > one partition). That's the only disadvantage I can think
> >> > of.
> >> 
> >> That is exactly what I am afraid of!
> >> 
> >> So, the 20-years old rule of thumb is still valid. :(
> >> 
> >> > I don't like partitions either (after some years, I
> >> > always found that sizes don't match my requirements any
> >> > more),
> >> 
> >> And this is exactly the reason why I do not want to partition
> >> my new hard drive! :)
> > 
> > Have you considered LVM? You get the benefits of separate filesystems
> > without the limitations of inflexible partitioning.
> 
> I am afraid of LVM because of the same reason as described below:
> 
> returning to the "old good times" of MS DOS 6.22, I do remember that working
> then on 40MB (yes, megabytes) hard drive I used some program that
> compressed all the data before saving them on that hard drive.
> Unfortunately, one day, because of the corruption, I lost all the data on
> that hard drive. Since then, I am very much afraid of compressed or
> encrypted hard drives.

LVM doesn't *need* to do any of that. It will only do as much as you tell it 
to do. If you only want to use it as a way of reshaping relatively simple 
partitions, you can use it for that.
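
For example, growing a logical volume and the ext4 filesystem on it can
be done online (the volume names are placeholders):

# lvextend -L +500G /dev/backupvg/backup
# resize2fs /dev/backupvg/backup

Shrinking works too, but that requires unmounting and shrinking the
filesystem before reducing the LV.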

Honestly, I tend not to create separate partitions for separate mount points 
these days. At least, not on personal systems. For servers, it can be 
beneficial to have /var separate from /, or /var/log separate from /var, or 
/var/spool, or /var/lib/mysql, or what have you. But the biggest driver for 
that, IME, is if one of those fills up, it can't take down the rest of the 
host.

In your case, I'd suggest using a single / filesystem. If it works, it works. 
If it doesn't, you'll know in the future where you need to be more flexible; 
there's no single panacea.

-- 
:wq



Re: [gentoo-user] Re: USB crucial file recovery

2016-09-01 Thread Michael Mol
On Wednesday, August 31, 2016 11:45:15 PM Alan McKinnon wrote:
> On 31/08/2016 17:25, Grant wrote:
> >> Which NTFS system are you using?
> >> 
> >> ntfs kernel module? It's quite dodgy and unsafe with writes
> >> ntfs-ng on fuse? I find that one quite solid
> > 
> > I'm using ntfs-ng as opposed to the kernel option(s).
> 
> I'm offering 10 to 1 odds that your problems came from ... one that you 
> yanked too soon

(pardon the in-line snip, while I get on my soap box)

The likelihood of this happening can be greatly reduced by setting 
vm.dirty_bytes to something like 2097125 and vm.dirty_background_bytes to 
something like 1048576. This prevents the kernel from queuing up as much data 
for sending to disk. The application doing the copy or write will normally 
report "complete" long before writes to slow media are actually...complete. 
Setting vm.dirty_bytes to something low prevents the kernel's backlog of data 
from getting so long.

vm.dirty_bytes has another, closely-related setting, vm.dirty_ratio.
vm.dirty_ratio is a percentage of RAM that is used for dirty bytes. If
vm.dirty_ratio is set, vm.dirty_bytes will read 0. If vm.dirty_bytes is
set, vm.dirty_ratio will read 0.

The default is for vm.dirty_ratio to be 20, which means up to 20% of 
your memory can find itself used as a write buffer for data on its way to a 
filesystem. On a system with only 2GiB of RAM, that's 409MiB of data that the 
kernel may still be waiting to push through the filesystem layer! If you're 
writing to, say, a class 10 SDHC card, the data may not be at rest for another 
40s after the application reports the copy operation is complete!

If you've got a system with 8GiB of memory, multiply all that by four.

The defaults for vm.dirty_bytes and vm.dirty_background_bytes are, IMO, badly 
broken and an insidious source of problems for both regular Linux users and 
system administrators.
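
To try the suggested limits (rounded here to an even 1 MiB / 2 MiB), a
sketch:

# sysctl -w vm.dirty_background_bytes=1048576
# sysctl -w vm.dirty_bytes=2097152

and, if you like the result, the same pair in /etc/sysctl.conf (or a
file under /etc/sysctl.d/) to make it persistent:

vm.dirty_background_bytes = 1048576
vm.dirty_bytes = 2097152

Setting the _bytes variants automatically zeroes the corresponding
_ratio ones, so nothing else needs to be changed.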

-- 
:wq



Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Dale
gevisz wrote:
> 2016-09-01 9:13 GMT+03:00 Alan McKinnon :
>> On 01/09/2016 08:04, gevisz wrote:
>>> I have bought an external 5TB Western Digital hard drive
>>> that I am going to use mainly for backing up some files
>>> in my home directory and carrying a very big files, for
>>> example a virtual machine image file, from one computer
>>> to another. This hard drive is preformatted with NTFS.
>>> Now, I am going to format it with ext4 which probably
>>> will take a lot of time taking into account that it is
>>> going to be done via USB connection. So, before formatting
>>> this hard drive I would like to know if it is still
>>> advisable to partition big hard drives into smaller
>>> logical ones.
>> it will take about 5 seconds to partition it.
>> And a few more to mkfs it.
> Just to partition - may be, but I very much doubt
> that it will take seconds to create a full-fledged
> ext4 file system on these 5TB via USB2 connention.
>
> Even more: my aquiantance from the Window world
> that recomended me this disc scared me that it may
> take days...
>

Something to think on.  You have a 5TB drive.  You format the whole
thing and let's say it takes 30 seconds.  Or, you break it into two
2.5TB partitions and then format those, which take 20 or 25 seconds
each.  That adds up to 40 to 50 seconds format time.  Isn't it faster to
format one large partition instead of two?  After all, you have to type
the command in to format it too which also takes a few seconds, assuming
you up arrow and just edit the partition letter.  No matter whether you
break the drive up into parts or not, you are still formatting 5TBs
worth of drive.  The only way you can save time is to not format the
whole thing. 

Things break.  They always have and always will.  Sure you can prepare
for that loss, but if not careful, you could lose it while you are
second, third, fourth etc etc etc guessing yourself and what tool you are
going to use.  I suspect that every file system out there has caused a
person to lose data before.  I'm sure that every brand and even model of
hard drive out there has caused someone to lose data before.  When it
gets as complex as a hard drive and the tools used on them, it has to
break at some point.  The best bet, duplicate your files just in case
something in the above list goes bad.  The only advice I think would be
good on this, don't use the same brand and model drive for both main and
backup.  One could even say not to use the same file system, that way if
one goes bad due to bad coding in the kernel, likely the other shouldn't
be affected, but even that can't be a for sure thing. 

I hope you aren't too worried about making a backup now that you can
think on how everything fails eventually.  ;-)

Dale

:-)  :-) 




Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Neil Bothwick
On Thu, 1 Sep 2016 12:09:09 +0300, gevisz wrote:

> > Have you considered LVM? You get the benefits of separate filesystems
> > without the limitations of inflexible partitioning.  
> 
> I am afraid of LVM because of the same reason as described below:
> 
> returning to the "old good times" of MS DOS 6.22, I do remember that
> working then on 40MB (yes, megabytes) hard drive I used some program
> that compressed all the data before saving them on that hard drive.
> Unfortunately, one day, because of the corruption, I lost all the data
> on that hard drive. Since then, I am very much afraid of compressed or
> encrypted hard drives.

LVM is neither encrypted nor compressed. The filesystems on it are no
different to the filesystems on physical partitions, and subject to the
same risks. An LVM logical volume is just a block device that is treated
the same as a physical partition on a non-LVM setup.

So far, you have come up with reasons, good or otherwise, for not taking
each of the available choices. You need to decide what you really need
and what is important to you. Only then can you decide on the best
arrangement for your needs.


-- 
Neil Bothwick

Evolution stops when stupidity is no longer fatal!




Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Rich Freeman
On Thu, Sep 1, 2016 at 2:04 AM, gevisz  wrote:
>
> Is it still advisable to partition a big hard drive
> into smaller logical ones and why?
>

Assuming this is only used on Linux machines (you mentioned moving
files around), here is what I would do:

1. Definitely create a partition table.  Yes, I know some like to
stick filesystems on raw drives, but you're basically going to fight
all the automation in existence if you do this.
2. Set it up as an LVM partition.  Unless you're using filesystems
like zfs/btrfs that have their own way of doing volume management,
this just makes things less painful down the road.
3. I'd probably just set it up as one big logical volume, unless you
know you don't need all the space and you think you might use it for
something else later.  You can change your mind on this with ext4+lvm
either way, but better to start out whichever way seems best.

It will take you all of 30 seconds to format this, unless you're
running badblocks (which almost nobody does, because...).
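
A sketch of steps 1-3 plus the format (the device name /dev/sdX is a
placeholder - triple-check it before running anything destructive; the
badblocks pass is the optional, slow part):

# badblocks -sv /dev/sdX           # optional read-only surface scan
# parted /dev/sdX mklabel gpt
# parted /dev/sdX mkpart primary 1MiB 100%
# pvcreate /dev/sdX1
# vgcreate backupvg /dev/sdX1
# lvcreate -l 100%FREE -n backup backupvg
# mkfs.ext4 -L backup /dev/backupvg/backup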

You seem to be concerned about losing data.  You should be.  This is a
physical storage device.  You WILL lose everything stored on it at
some point in time.  You mitigate this by one or more of:
1.  Not storing anything you mind losing on the drive, and then not
complaining when you lose it.
2.  Keeping backups, preferably at a different physical location,
using a periodically tested recovery methodology.
3.  Availability solutions like RAID (not the same as a backup, but it
will mean less downtime WHEN you WILL have a drive failure).  Some
filesystems like zfs/btrfs have specific ways of achieving this (and
are generally more resistant to unreliable storage devices, which all
storage devices are).

I've actually had LVM eat my data once due to some kind of really rare
bug (found one discussion of similar issues on some forum somewhere).
That isn't a good reason not to use LVM.  Wanting to plug the drive
into a bunch of Windows machines would be a good reason not to use
LVM, or ext4 for that matter.

Most of the historic reasons for not having large volumes had to do
with addressing limits, whether it be drive geometry limits,
filesystem limits, etc.  Modern partition tables like GPT and
filesystems can handle volumes MUCH larger than 5TB.

Most modern journaling filesystems should also tend to avoid failure
modes like losing the entire filesystem during a power failure (when
correctly used, heaven help you if you follow a random friend's advice
with mount options, like not using at least ordered data or disabling
barriers).  But, bugs can exist, which is a big reason to have backups
and not just trust your filesystem unless you don't care much about
the data.

-- 
Rich



Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread gevisz
2016-09-01 12:01 GMT+03:00 Alan McKinnon :
> On 01/09/2016 09:18, gevisz wrote:
>> 2016-09-01 9:13 GMT+03:00 Alan McKinnon :
>>> On 01/09/2016 08:04, gevisz wrote:
>
> [snip]
>
 Is it still advisable to partition a big hard drive
 into smaller logical ones and why?
>>>
>>> The only reason to partition a drive is to get 2 or more
>>> smaller ones that differ somehow (size, inode ratio, mount options, etc)
>>>
>>> Go with no partition table by all means, but if you one day find you
>>> need one, you will have to copy all your data off, repartition, and copy
>>> your data back. If you are certain that will not happen (eg you will
>>> rather buy a second drive) then by all means dispense with partitions.
>>>
>>> They are after all nothing more than a Microsoft invention from the 80s
>>> so people could install UCSD Pascal next to MS-DOS
>>
>> I definitely will not need more than one mount point for this hard drive
>> but I do remember some arguments that partitioning a large hard drive
>> into smaller logical ones gives me more safety in case a file system
>> suddenly will get corrupted because in this case I will lose my data
>> only on one of the logical partitions and not on the whole drive.
>>
>> Is this argument still valid nowadays?
>
> That is the most stupid dumbass argument I've heard in weeks.
> It doesn't even deserve a response.
>
> Who the fuck is promoting this shit?

Even somebody in this thread (in addition to me, and independently of me)
made the same argument.

But I do not state anything, I am just asking.



Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Alan McKinnon
On 01/09/2016 10:44, gevisz wrote:
> 2016-09-01 10:23 GMT+03:00 Frank Steinmetzger :
>> On Thu, Sep 01, 2016 at 08:13:23AM +0200, Alan McKinnon wrote:
>>
>>> it will take about 5 seconds to partition it.
>>> And a few more to mkfs it.
>>>
>>> Are you sure you aren't thinking of mkfs with ext2 (which did take hours
>>> for a drive that size)?
>>
>> Some people do a full systems check (i.e. badblocks) before entrusting a
>> drive with anything important.
> 
> It is a good advice! I have already thought of this but I am sorry to
> acknowledge
> that, since the "old good times" of MS DOS 6.22, I never did this in Linux. :(
> 
> And except for one 2.5" disk failure on my old laptop about 7 years ago,
> I had no problem with this so far. :)
> 
> All other my hard disks work for about 10 years without any intervention
> from my side and even without any backups so far. That's why I started
> to think about it now. :)
> 
> So, can you, please, advice me about the program or utility that can do
> badblocks check for me?
> 
 Is it still advisable to partition a big hard drive
 into smaller logical ones and why?
>>>
>>> The only reason to partition a drive is to get 2 or more
>>> smaller ones that differ somehow (size, inode ratio, mount options, etc)
>>
>> If you want to do backups, then of course the file system is important, so
>> it retains permissions and stuff. Your ext4 choice is the right one in that
>> case. However, I partitioned by backupdrive into two partitions, so the one
>> with the sensitive data can be encrypted. The big partition that holds media
>> files has not got that treatment.
> 
> It is, again, a good advice but, again, returning to the "old good times"
> of MS DOS 6.22, I do remember that working then on 40MB (yes, megabytes)
> hard drive I used some program that compressed all the data before saving
> them on that hard drive. Unfortunately, one day, because of the corruption,
> I lost all the data on that hard drive. Since then, I am very much afraid of
> compressed or encrypted hard drives.
> 
>>> Go with no partition table by all means, but if you one day find you
>>> need one, you will have to copy all your data off, repartition, and copy
>>> your data back.
>>
>> When I do the mentioned partitioning scheme, I put the biggest partition at
>> the beginning of the drive and the smaller one(s) at the back. That way,
>> should I ever actually need to resize a partition, I only have to export the
>> smaller partition for the process (or none at all, if it’s just a backup
>> itself and I have another backup on another drive).
>> Of course there’s LVM these days, but up until recently, I used NTFS for the
>> media partition so I could also read it in $DUMB_OS, which doesn’t know LVM.
>> Only a short while back, I also switched to ext4 for that, so I can retain
>> file names with : and ? in them. But I still refrained from using LVM,
>> though.
> 
> I am afraid of LVM because of the same reason I described above.

You are allowing your fears of 20 years ago to determine your present
day attitudes. That's silly.

There's a misconception that a drive is somehow a pristine storage
device with pigeon holes where data goes and nothing below the fs level
can have any effect. Nothing could be further from the truth.

The sheer amount of encoding that goes into a drive is almost beyond
belief. It is NOT a digital device, it is analog - exactly like tape,
just many times more complex. It doesn't store bits, it stores
fluctuating regions of local magnetism and all of that gets decoded by
analog circuitry to represent digital bits. About 50% of your drive's
capacity (measured by what the heads see) is devoted to marking where
the tracks go, where the sectors start and end, checksumming and many
other firmware safeguards.

All that stuff can go wrong.

Yes, encryption on-drive can go wrong. So can SSL traffic, gpg keys,
encrypted mail, vpns and all your traffic over the internet (if you're
on a corporate network I can almost assure you it's all encrypted inside
an IPSec tunnel).

That stuff doesn't break. Neither does your disk encryption.

LVM can break, but it's really hard. All a PV is, is a block device with
a 2k signature at the start then some metadata. All a VG is, is a bunch
of PVs and some metadata to list them. An LV is really nothing more than
an indirection lookup table: The fs knows which inode and sectors the
file uses, and the LV has a mapping table to track which real disk
sectors those are.
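
You can look at that indirection on any live LVM system with read-only
commands (the VG/LV name in the last one is a placeholder):

# pvs -o pv_name,vg_name,pv_size
# lvs -o lv_name,vg_name,lv_size,devices
# lvdisplay -m backupvg/backup     # shows the logical-to-physical extent map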

The kernel does this all the time with every smidgen of RAM you have
(it's how a vmm works). It's the same technology in essence.

So yeah, stuff can break. But that same stuff is used in many other
highly critical areas where you likely don't know of it, and that
doesn't change that it's there.

I know of no recent reports where disk encryption or volume management
broke data solely due to a code bug in stable production versions. Devs
are not that stupid :-)

So honestly, your 

Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Alan McKinnon
On 01/09/2016 10:59, gevisz wrote:
> 2016-09-01 11:03 GMT+03:00 Neil Bothwick :
>> On Thu, 1 Sep 2016 10:18:29 +0300, gevisz wrote:
>>
 it will take about 5 seconds to partition it.
 And a few more to mkfs it.
>>>
>>> Just to partition - maybe, but I very much doubt
>>> that it will take seconds to create a full-fledged
>>> ext4 file system on these 5TB via USB2 connection.
>>
>> Even if that were the case, does it really matter?
> 
> You are right: it does not matter for the main question.
> However, it is an additional reason to think twice and
> ask knowledgeable people before starting. :)
> 
>> You said you wanted to use this drive for backups,
>> surely doing it right is more important than doing
>> it quickly. It's not like you have to hold its hand
>> while mkfs is running.
> 
> But I would have to keep my fingers crossed so that
> there would not be a sudden blackout while formatting the
> hard disk anyway. :)

Even if it does, so what?

Unplug, start over. It's a mkfs, it lays down where the inodes are. You
can do it over and over and over and over, safely
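
For example, something like this (sdX is just a placeholder for whatever
the drive shows up as - check first):

# parted -s /dev/sdX mklabel gpt mkpart backup ext4 0% 100%
# mkfs.ext4 -L backup /dev/sdX1

If the power dies halfway through, you just run the same two commands
again.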


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread gevisz
2016-09-01 12:04 GMT+03:00 Neil Bothwick :
> On Thu, 1 Sep 2016 11:59:55 +0300, gevisz wrote:
>
>> > You said you wanted to use this drive for backups,
>> > surely doing it right is more important than doing
>> > it quickly. It's not like you have to hold its hand
>> > while mkfs is running.
>>
>> But I would have to keep my fingers crossed so that
>> there would not be a sudden blackout while formatting the
>> hard disk anyway. :)
>
> Nah, just start again. It's only after the power goes out for the third
> time that you decide the Universe hates you and give up!

:)

> Neil Bothwick
>
> Q: What's the second worst sound you can hear a sysadmin make?
> A: Uh-oh
> Q: And the worst sound?
> A: Oops



Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread gevisz
2016-09-01 11:54 GMT+03:00 Neil Bothwick :
> On Thu, 1 Sep 2016 11:49:43 +0300, gevisz wrote:
>
>> > If your filesystem becomes corrupt (and you are unable to
>> > repair it), *all* of your data is lost (instead of just
>> > one partition). That's the only disadvantage I can think
>> > of.
>>
>> That is exactly what I am afraid of!
>>
>> So, the 20-year-old rule of thumb is still valid. :(
>>
>> > I don't like partitions either (after some years, I
>> > always found that sizes don't match my requirements any
>> > more),
>>
>> And this is exactly the reason why I do not want to partition
>> my new hard drive! :)
>
> Have you considered LVM? You get the benefits of separate filesystems
> without the limitations of inflexible partitioning.

I am afraid of LVM because of the same reason as described below:

returning to the "old good times" of MS DOS 6.22, I do remember that working
then on a 40MB (yes, megabytes) hard drive I used some program that compressed
all the data before saving them on that hard drive. Unfortunately, one day,
because of the corruption, I lost all the data on that hard drive. Since then,
I am very much afraid of compressed or encrypted hard drives.

> Neil Bothwick
>
> For a list of all the ways technology has failed to improve the
> quality of life, please press three.



Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread gevisz
2016-09-01 11:55 GMT+03:00 Frank Steinmetzger :
> On Thu, Sep 01, 2016 at 11:44:19AM +0300, gevisz wrote:
>
>> > Some people do a full systems check (i.e. badblocks) before entrusting a
>> > drive with anything important.
>>
>> It is good advice! I have already thought of this but I am sorry to
>> acknowledge that, since the "old good times" of MS DOS 6.22, I never did
>> this in Linux. :(
>> […]
>> So, can you, please, advise me about the program or utility that can do
>> badblocks check for me?
>
> Badblocks is part of e2fsprogs. But since you’re using USB2, this will
> really take a while. At best I get 39 MB/s out of it. Another way is a
> S.M.A.R.T. test, methinks `smartctl -t full` is the command for that. But I
> don’t know what exactly is being tested there. But it runs fully internal of
> the disk, so no USB2-bottleneck. Others may chime in if I tell fairy tales.

Thank you for your advice. I will try both.

> Gruß | Greetings | Qapla'
> I cna ytpe 300 wrods pre mniuet!!!

:)



Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Neil Bothwick
On Thu, 1 Sep 2016 11:59:55 +0300, gevisz wrote:

> > You said you wanted to use this drive for backups,
> > surely doing it right is more important than doing
> > it quickly. It's not like you have to hold its hand
> > while mkfs is running.  
> 
> But I would have to keep my fingers crossed so that
> there would not be a sudden blackout while formatting the
> hard disk anyway. :)

Nah, just start again. It's only after the power goes out for the third
time that you decide the Universe hates you and give up!


-- 
Neil Bothwick

Q: What's the second worst sound you can hear a sysadmin make?
A: Uh-oh
Q: And the worst sound?
A: Oops




Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Alan McKinnon
On 01/09/2016 09:18, gevisz wrote:
> 2016-09-01 9:13 GMT+03:00 Alan McKinnon :
>> On 01/09/2016 08:04, gevisz wrote:

[snip]

>> it will take about 5 seconds to partition it.
>> And a few more to mkfs it.
> 
> Just to partition - maybe, but I very much doubt
> that it will take seconds to create a full-fledged
> ext4 file system on these 5TB via USB2 connection.


Do it. Tell me how long it took.

Discussing it without doing it and offering someone else's opinion is a
100% worthless activity

> 
> Even more: my acquaintance from the Windows world
> who recommended this disk to me scared me that it may
> take days...

Mickey Mouse told me it takes microseconds. So what?

Do it. Tell me how long it took.

>>> Is it still advisable to partition a big hard drive
>>> into smaller logical ones and why?
>>
>> The only reason to partition a drive is to get 2 or more
>> smaller ones that differ somehow (size, inode ratio, mount options, etc)
>>
>> Go with no partition table by all means, but if you one day find you
>> need one, you will have to copy all your data off, repartition, and copy
>> your data back. If you are certain that will not happen (eg you will
>> rather buy a second drive) then by all means dispense with partitions.
>>
>> They are after all nothing more than a Microsoft invention from the 80s
>> so people could install UCSD Pascal next to MS-DOS
> 
> I definitely will not need more than one mount point for this hard drive
> but I do remember some arguments that partitioning a large hard drive
> into smaller logical ones gives me more safety in case a file system
> suddenly gets corrupted, because in this case I will lose my data
> only on one of the logical partitions and not on the whole drive.
> 
> Is this argument still valid nowadays?

That is the most stupid dumbass argument I've heard in weeks.
It doesn't even deserve a response.

Who the fuck is promoting this shit?


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread gevisz
2016-09-01 11:03 GMT+03:00 Neil Bothwick :
> On Thu, 1 Sep 2016 10:18:29 +0300, gevisz wrote:
>
>> > it will take about 5 seconds to partition it.
>> > And a few more to mkfs it.
>>
>> Just to partition - maybe, but I very much doubt
>> that it will take seconds to create a full-fledged
>> ext4 file system on these 5TB via USB2 connection.
>
> Even if that were the case, does it really matter?

You are right: it does not matter for the main question.
However, it is an additional reason to think twice and
ask knowledgeable people before starting. :)

> You said you wanted to use this drive for backups,
> surely doing it right is more important than doing
> it quickly. It's not like you have to hold its hand
> while mkfs is running.

But I would have to keep my fingers crossed so that
there would not be a sudden blackout while formatting the
hard disk anyway. :)

> Neil Bothwick
>
> Bagpipe for free: Stuff cat under arm. Pull legs, chew tail.



Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Frank Steinmetzger
On Thu, Sep 01, 2016 at 11:44:19AM +0300, gevisz wrote:

> > Some people do a full systems check (i.e. badblocks) before entrusting a
> > drive with anything important.
> 
> It is good advice! I have already thought of this but I am sorry to
> acknowledge that, since the "old good times" of MS DOS 6.22, I never did
> this in Linux. :(
> […]
> So, can you, please, advise me about the program or utility that can do
> badblocks check for me?

Badblocks is part of e2fsprogs. But since you’re using USB2, this will
really take a while. At best I get 39 MB/s out of it. Another way is a
S.M.A.R.T. test, methinks `smartctl -t full` is the command for that. But I
don’t know what exactly is being tested there. But it runs fully internal of
the disk, so no USB2-bottleneck. Others may chime in if I tell fairy tales.
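
Roughly, the two checks would be something like this (sketched from
memory; /dev/sdc stands in for whatever the drive enumerates as, and the
plain badblocks run below is the non-destructive read-only scan - the -w
write test would wipe the disk):

# badblocks -sv /dev/sdc
# smartctl -t long -d sat /dev/sdc
# smartctl -l selftest -d sat /dev/sdc    (to check the result afterwards)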

-- 
Gruß | Greetings | Qapla'
I cna ytpe 300 wrods pre mniuet!!!



Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Neil Bothwick
On Thu, 1 Sep 2016 11:49:43 +0300, gevisz wrote:

> > If your filesystem becomes corrupt (and you are unable to
> > repair it), *all* of your data is lost (instead of just
> > one partition). That's the only disadvantage I can think
> > of.  
> 
> That is exactly what I am afraid of!
> 
> So, the 20-year-old rule of thumb is still valid. :(
> 
> > I don't like partitions either (after some years, I
> > always found that sizes don't match my requirements any
> > more),  
> 
> And this is exactly the reason why I do not want to partition
> my new hard drive! :)

Have you considered LVM? You get the benefits of separate filesystems
without the limitations of inflexible partitioning.


-- 
Neil Bothwick

For a list of all the ways technology has failed to improve the
quality of life, please press three.




Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread gevisz
2016-09-01 10:30 GMT+03:00 Matthias Hanft :
> gevisz wrote:
>>
>> But what are disadvantages of not partitioning a big
>> hard drive into smaller logical ones?
>
> If your filesystem becomes corrupt (and you are unable to
> repair it), *all* of your data is lost (instead of just
> one partition). That's the only disadvantage I can think
> of.

That is exactly what I am afraid of!

So, the 20-year-old rule of thumb is still valid. :(

> I don't like partitions either (after some years, I
> always found that sizes don't match my requirements any
> more),

And this is exactly the reason why I do not want to partition
my new hard drive! :)

> and therefore, on my new server, I didn't create
> any other partitions than "boot":
>
> home01 ~ # df -h -T
> Filesystem Type  Size  Used Avail Use% Mounted on
> /dev/sda4  xfs17T   14T  2.9T  83% /
> devtmpfs   devtmpfs   10M 0   10M   0% /dev
> tmpfs  tmpfs 3.2G  644K  3.2G   1% /run
> shmtmpfs  16G  512K   16G   1% /dev/shm
> /dev/sda2  ext2  124M   46M   72M  39% /boot
> ACDFusefuse.ACDFuse  100T  284M  100T   1% /mnt/acd
> /dev/sdb1  ext3  2.7T  707G  1.9T  28% /mnt/toshiba
> home01 ~ #
>
> Backup of important and/or secret files goes to an external
> USB hard drive (sdb1) which is formatted with ext3 for
> maximum compatibility (every other Linux can read this
> without kernel hacks); backup of not-so-secret files goes
> to Amazon Cloud Drive (acd) or some other "cloud"; unimportant
> files (videos which I have on DVD or BD anyway, or can re-buy)
> just aren't backed up.
>
> -Matt
>
>



Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread gevisz
2016-09-01 10:23 GMT+03:00 Frank Steinmetzger :
> On Thu, Sep 01, 2016 at 08:13:23AM +0200, Alan McKinnon wrote:
>
>> it will take about 5 seconds to partition it.
>> And a few more to mkfs it.
>>
>> Are you sure you aren't thinking of mkfs with ext2 (which did take hours
>> for a drive that size)?
>
> Some people do a full systems check (i.e. badblocks) before entrusting a
> drive with anything important.

It is good advice! I have already thought of this, but I am sorry to
acknowledge that, since the "old good times" of MS DOS 6.22, I never did
this in Linux. :(

And except for one 2.5" disk failure on my old laptop about 7 years ago,
I have had no problems with this so far. :)

All my other hard disks have worked for about 10 years without any intervention
from my side and even without any backups so far. That's why I started
to think about it now. :)

So, can you, please, advise me about the program or utility that can do
badblocks check for me?

>> > Is it still advisable to partition a big hard drive
>> > into smaller logical ones and why?
>>
>> The only reason to partition a drive is to get 2 or more
>> smaller ones that differ somehow (size, inode ratio, mount options, etc)
>
> If you want to do backups, then of course the file system is important, so
> it retains permissions and stuff. Your ext4 choice is the right one in that
> case. However, I partitioned my backup drive into two partitions, so the one
> with the sensitive data can be encrypted. The big partition that holds media
> files has not got that treatment.

It is, again, good advice but, again, returning to the "old good times"
of MS DOS 6.22, I do remember that working then on a 40MB (yes, megabytes)
hard drive I used some program that compressed all the data before saving
them on that hard drive. Unfortunately, one day, because of the corruption,
I lost all the data on that hard drive. Since then, I am very much afraid of
compressed or encrypted hard drives.

>> Go with no partition table by all means, but if you one day find you
>> need one, you will have to copy all your data off, repartition, and copy
>> your data back.
>
> When I do the mentioned partitioning scheme, I put the biggest partition at
> the beginning of the drive and the smaller one(s) at the back. That way,
> should I ever actually need to resize a partition, I only have to export the
> smaller partition for the process (or none at all, if it’s just a backup
> itself and I have another backup on another drive).
> Of course there’s LVM these days, but up until recently, I used NTFS for the
> media partition so I could also read it in $DUMB_OS, which doesn’t know LVM.
> Only a short while back, I also switched to ext4 for that, so I can retain
> file names with : and ? in them. But I still refrained from using LVM,
> though.

I am afraid of LVM because of the same reason I described above.

> Gruß | Greetings | Qapla’
> I’ve been using vi for 15 years, because I don’t know with which command
> to close it.

:)



Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Neil Bothwick
On Thu, 1 Sep 2016 10:18:29 +0300, gevisz wrote:

> > it will take about 5 seconds to partition it.
> > And a few more to mkfs it.  
> 
> Just to partition - maybe, but I very much doubt
> that it will take seconds to create a full-fledged
> ext4 file system on these 5TB via USB2 connection.

Even if that were the case, does it really matter? You said you wanted to
use this drive for backups, surely doing it right is more important than
doing it quickly. It's not like you have to hold its hand while mkfs is
running.


-- 
Neil Bothwick

Bagpipe for free: Stuff cat under arm. Pull legs, chew tail.




Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Matthias Hanft
gevisz wrote:
> 
> But what are disadvantages of not partitioning a big
> hard drive into smaller logical ones?

If your filesystem becomes corrupt (and you are unable to
repair it), *all* of your data is lost (instead of just
one partition). That's the only disadvantage I can think
of. I don't like partitions either (after some years, I
always found that sizes don't match my requirements any
more), and therefore, on my new server, I didn't create
any other partitions than "boot":

home01 ~ # df -h -T
Filesystem Type  Size  Used Avail Use% Mounted on
/dev/sda4  xfs17T   14T  2.9T  83% /
devtmpfs   devtmpfs   10M 0   10M   0% /dev
tmpfs  tmpfs 3.2G  644K  3.2G   1% /run
shmtmpfs  16G  512K   16G   1% /dev/shm
/dev/sda2  ext2  124M   46M   72M  39% /boot
ACDFusefuse.ACDFuse  100T  284M  100T   1% /mnt/acd
/dev/sdb1  ext3  2.7T  707G  1.9T  28% /mnt/toshiba
home01 ~ #

Backup of important and/or secret files goes to an external
USB hard drive (sdb1) which is formatted with ext3 for
maximum compatibility (every other Linux can read this
without kernel hacks); backup of not-so-secret files goes
to Amazon Cloud Drive (acd) or some other "cloud"; unimportant
files (videos which I have on DVD or BD anyway, or can re-buy)
just aren't backed up.

-Matt




Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Frank Steinmetzger
On Thu, Sep 01, 2016 at 08:13:23AM +0200, Alan McKinnon wrote:

> it will take about 5 seconds to partition it.
> And a few more to mkfs it.
>
> Are you sure you aren't thinking of mkfs with ext2 (which did take hours
> for a drive that size)?

Some people do a full systems check (i.e. badblocks) before entrusting a
drive with anything important.

> > Is it still advisable to partition a big hard drive
> > into smaller logical ones and why?
> 
> The only reason to partition a drive is to get 2 or more
> smaller ones that differ somehow (size, inode ratio, mount options, etc)

If you want to do backups, then of course the file system is important, so
it retains permissions and stuff. Your ext4 choice is the right one in that
case. However, I partitioned my backup drive into two partitions, so the one
with the sensitive data can be encrypted. The big partition that holds media
files has not got that treatment.
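
If you go that route, the encrypted partition is just LUKS with an
ordinary ext4 on top. A minimal sketch - the names are made up, and
luksFormat of course destroys whatever is on that partition:

# cryptsetup luksFormat /dev/sdX2
# cryptsetup open /dev/sdX2 backup_crypt
# mkfs.ext4 /dev/mapper/backup_crypt
# mount /dev/mapper/backup_crypt /mnt/backup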

> Go with no partition table by all means, but if you one day find you
> need one, you will have to copy all your data off, repartition, and copy
> your data back.

When I do the mentioned partitioning scheme, I put the biggest partition at
the beginning of the drive and the smaller one(s) at the back. That way,
should I ever actually need to resize a partition, I only have to export the
smaller partition for the process (or none at all, if it’s just a backup
itself and I have another backup on another drive).
Of course there’s LVM these days, but up until recently, I used NTFS for the
media partition so I could also read it in $DUMB_OS, which doesn’t know LVM.
Only a short while back, I also switched to ext4 for that, so I can retain
file names with : and ? in them. But I still refrained from using LVM,
though.
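
Roughly what that layout looks like (sizes and names purely illustrative):

# parted -s /dev/sdX mklabel gpt
# parted -s /dev/sdX mkpart media ext4 0% 90%
# parted -s /dev/sdX mkpart private ext4 90% 100%

so only the small partition at the end ever has to be moved or re-created.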

-- 
Gruß | Greetings | Qapla’
I’ve been using vi for 15 years, because I don’t know with which command
to close it.



Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread gevisz
2016-09-01 9:13 GMT+03:00 Alan McKinnon :
> On 01/09/2016 08:04, gevisz wrote:
>> I have bought an external 5TB Western Digital hard drive
>> that I am going to use mainly for backing up some files
>> in my home directory and carrying very big files, for
>> example a virtual machine image file, from one computer
>> to another. This hard drive is preformatted with NTFS.
>> Now, I am going to format it with ext4 which probably
>> will take a lot of time taking into account that it is
>> going to be done via USB connection. So, before formatting
>> this hard drive I would like to know if it is still
>> advisable to partition big hard drives into smaller
>> logical ones.
>
> it will take about 5 seconds to partition it.
> And a few more to mkfs it.

Just to partition - maybe, but I very much doubt
that it will take seconds to create a full-fledged
ext4 file system on these 5TB via USB2 connection.

Even more: my acquaintance from the Windows world
who recommended this disk to me scared me that it may
take days...

> Are you sure you aren't thinking of mkfs with ext2
> (which did take hours for a drive that size)?
>
>>
>> For the last 20 years or so, following the advice of an older
>> colleague, I always partitioned all my hard drives into
>> smaller logical ones and know very well all the
>> disadvantages of doing so. :)
>
> So you are following 20 year-old advice for hardware relevant to 20
> years ago and not taking tech advances into account ? :-)

Yes. But, please, take into account that after these 20 years
I decided to reconsider the old "rule of thumb." :)

>> But what are disadvantages of not partitioning a big
>> hard drive into smaller logical ones?
>
> You only get 1 mount point
> Some ancient software might whinge and complain about not having a
> partition table present.
> The drive vendor no longer has a place to put their magic sekrit
> phone-home data collection stuff. Oh wait, that's a benefit and belongs
> below
>
>>
>> Is it still advisable to partition a big hard drive
>> into smaller logical ones and why?
>
> The only reason to partition a drive is to get 2 or more
> smaller ones that differ somehow (size, inode ratio, mount options, etc)
>
> Go with no partition table by all means, but if you one day find you
> need one, you will have to copy all your data off, repartition, and copy
> your data back. If you are certain that will not happen (eg you will
> rather buy a second drive) then by all means dispense with partitions.
>
> They are after all nothing more than a Microsoft invention from the 80s
> so people could install UCSD Pascal next to MS-DOS

I definitely will not need more than one mount point for this hard drive
but I do remember some arguments that partitioning a large hard drive
into smaller logical ones gives me more safety in case a file system
suddenly gets corrupted, because in this case I will lose my data
only on one of the logical partitions and not on the whole drive.

Is this argument still valid nowadays?



Re: [gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread Alan McKinnon
On 01/09/2016 08:04, gevisz wrote:
> I have bought an external 5TB Western Digital hard drive
> that I am going to use mainly for backing up some files
> in my home directory and carrying very big files, for
> example a virtual machine image file, from one computer
> to another. This hard drive is preformatted with NTFS.
> Now, I am going to format it with ext4 which probably
> will take a lot of time taking into account that it is
> going to be done via USB connection. So, before formatting
> this hard drive I would like to know if it is still
> advisable to partition big hard drives into smaller
> logical ones.

it will take about 5 seconds to partition it.
And a few more to mkfs it.

Are you sure you aren't thinking of mkfs with ext2 (which did take hours
for a drive that size)?
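
For what it's worth, mkfs.ext4 skips writing most of the inode tables up
front by default (lazy_itable_init - the kernel zeroes them in the
background after the first mount), which is why even 5TB takes seconds
rather than hours. You could force the old behaviour with something like

# mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/sdX1

but there is normally no reason to.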

> 
> For the last 20 years or so, following the advice of an older
> colleague, I always partitioned all my hard drives into
> smaller logical ones and know very well all the
> disadvantages of doing so. :)

So you are following 20 year-old advice for hardware relevant to 20
years ago and not taking tech advances into account ? :-)

> 
> But what are disadvantages of not partitioning a big
> hard drive into smaller logical ones?

You only get 1 mount point
Some ancient software might whinge and complain about not having a
partition table present.
The drive vendor no longer has a place to put their magic sekrit
phone-home data collection stuff. Oh wait, that's a benefit and belongs
below

> 
> Is it still advisable to partition a big hard drive
> into smaller logical ones and why?

The only reason to partition a drive is to get 2 or more
smaller ones that differ somehow (size, inode ratio, mount options, etc)

Go with no partition table by all means, but if you one day find you
need one, you will have to copy all your data off, repartition, and copy
your data back. If you are certain that will not happen (eg you will
rather buy a second drive) then by all means dispense with partitions.

They are after all nothing more than a Microsoft invention from the 80s
so people could install UCSD Pascal next to MS-DOS


-- 
Alan McKinnon
alan.mckin...@gmail.com




[gentoo-user] [OT] Is it still advisable to partition a big hard drive?

2016-09-01 Thread gevisz
I have bought an external 5TB Western Digital hard drive
that I am going to use mainly for backing up some files
in my home directory and carrying very big files, for
example a virtual machine image file, from one computer
to another. This hard drive is preformatted with NTFS.
Now, I am going to format it with ext4 which probably
will take a lot of time taking into account that it is
going to be done via USB connection. So, before formatting
this hard drive I would like to know if it is still
advisable to partition big hard drives into smaller
logical ones.

For the last 20 years or so, following the advice of an older
colleague, I always partitioned all my hard drives into
smaller logical ones and know very well all the
disadvantages of doing so. :)

But what are disadvantages of not partitioning a big
hard drive into smaller logical ones?

Is it still advisable to partition a big hard drive
into smaller logical ones and why?



Re: [gentoo-user] Re: USB crucial file recovery

2016-09-01 Thread Alan McKinnon
On 01/09/2016 05:42, J. Roeleveld wrote:
> On August 31, 2016 11:45:15 PM GMT+02:00, Alan McKinnon wrote:
>> On 31/08/2016 17:25, Grant wrote:
>  Is there a
> filesystem that will make that unnecessary and exhibit better
> reliability than NTFS?

 Yes, FAT. It works and works well.
 Or exFAT which is Microsoft's solution to the problem of very large
 files on FAT.
>>>
>>>
>>> FAT32 won't work for me since I need to use files larger than 4GB.  I
>>> know it's beta software but should exfat be more reliable than ntfs?
>>
>> It doesn't do all the fancy journalling that ntfs does, so based solely
>> on complexity, it ought to be more reliable.
>>
>> None of us have done real tests and mentioned it here, so we really 
>> don't know how it pans out in the real world.
>>
>> Do a bunch of tests yourself and decide
> 
> When I was a student, one of my professors used FAT to explain how 
> filesystems work. The reason for this is that the actual filesystem is quite 
> simple to follow and fixing can actually be done by hand using a hex editor.
> 
> This is no longer possible with other filesystems.
> 
> Then again, a lot of embedded devices (especially digital cameras) don't even
> implement FAT correctly, leading to broken images.
> Those implementations are broken at the point where fragmentation would occur.
> Solution: never delete pictures on the camera. Simply move them off and do it 
> on a computer.
> 
 Which NTFS system are you using?

 ntfs kernel module? It's quite dodgy and unsafe with writes
 ntfs-3g on fuse? I find that one quite solid
>>>
>>>
>>> I'm using ntfs-3g as opposed to the kernel option(s).
>>
>> I'm offering 10 to 1 odds that your problems came from a faulty USB 
>> stick, or maybe one that you yanked too soon
> 
> I'm with Alan here. I have seen too many handout USB sticks from conferences 
> that don't last. I only use them for:
> Quickly moving a file from A to B.
> Booting the latest sysresccd
> Scanning a document
> Printing a PDF
> (For last 2, my printer has a USB slot)
> 
> Important files are stored on my NAS which is backed up regularly.

Indeed. The trouble with backups is that they are difficult to get
right, time consuming, easy to ignore, and very very expensive (time and
money wise)


-- 
Alan McKinnon
alan.mckin...@gmail.com