Re: [gentoo-user] Re: USB crucial file recovery

2016-08-31 Thread J. Roeleveld
On August 31, 2016 11:45:15 PM GMT+02:00, Alan McKinnon wrote:
>On 31/08/2016 17:25, Grant wrote:
>>>> Is there a
>>>> filesystem that will make that unnecessary and exhibit better
>>>> reliability than NTFS?
>>>
>>> Yes, FAT. It works and works well.
>>> Or exFAT which is Microsoft's solution to the problem of very large
>>> files on FAT.
>>
>>
>> FAT32 won't work for me since I need to use files larger than 4GB.  I
>> know it's beta software but should exfat be more reliable than ntfs?
>
>It doesn't do all the fancy journalling that ntfs does, so based solely
>on complexity, it ought to be more reliable.
>
>None of us have done real tests and mentioned it here, so we really 
>don't know how it pans out in the real world.
>
>Do a bunch of tests yourself and decide

When I was a student, one of my professors used FAT to explain how filesystems 
work. He chose it because the actual filesystem is quite simple to follow, and 
repairs can actually be done by hand using a hex editor.

This is no longer possible with other filesystems.

Then again, a lot of embedded devices (especially digital cameras) don't even 
implement FAT correctly, leading to broken images.
Those implementations break at the point where fragmentation would occur.
Solution: never delete pictures on the camera. Simply move them off and do it 
on a computer.
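
To see just how simple it is: the first sector of a FAT partition holds the
BIOS parameter block, and the important fields can be read straight off a
hex dump. A quick sketch (assuming the stick is /dev/sdb1; adjust for your
device):

% dd if=/dev/sdb1 bs=512 count=1 2>/dev/null | xxd | head -4
# offset 0x0b-0x0c: bytes per sector (little-endian, usually 0x0200 = 512)
# offset 0x0d:      sectors per cluster
# offset 0x0e-0x0f: reserved sectors before the first FAT
# offset 0x10:      number of FAT copies (normally 2)

With those values and a hex editor you can walk the FAT chains by hand,
which is exactly what makes manual repair feasible.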

>>> Which NTFS system are you using?
>>>
>>> ntfs kernel module? It's quite dodgy and unsafe with writes
>>> ntfs-3g on fuse? I find that one quite solid
>>
>>
>> I'm using ntfs-3g as opposed to the kernel option(s).
>
>I'm offering 10 to 1 odds that your problems came from a faulty USB 
>stick, or maybe one that you yanked too soon

I'm with Alan here. I have seen too many handout USB sticks from conferences 
that don't last. I only use them for:
Quickly moving a file from A to B.
Booting the latest sysresccd
Scanning a document
Printing a PDF
(For the last two, my printer has a USB slot)

Important files are stored on my NAS which is backed up regularly.

--
Joost


-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.



[gentoo-user] x-rite i1 display3 support

2016-08-31 Thread Devrin Talen
I'm trying to use an X-rite i1 Display 3 to calibrate my laptop screen in
gnome 3 and running into some trouble.  The color calibration settings
screen opens up as soon as I plug in the calibrator, and the first error I
see is after I've gone through all the options and clicked the start button
to actually begin calibrating.

At that point the error message is:

An internal error occurred that could not be recovered.
You can remove the calibration device.

This error occurs immediately after clicking the button.

So I googled around and I found a few links to folks having similar issues:

[1]: https://github.com/hughsie/colord/pull/29
[2]: https://bugzilla.redhat.com/show_bug.cgi?id=1297167
[3]: https://bugzilla.redhat.com/show_bug.cgi?id=1000910

Out of those, the third seems to be the closest to the issue I'm having.
From what I can see, the USB device is enumerated properly by the kernel:

% dmesg # snipped the relevant stuff:
[ 2406.995020] usb 1-1.2: new full-speed USB device number 8 using ehci-pci
[ 2407.083266] usb 1-1.2: New USB device found, idVendor=0765,
idProduct=5021
[ 2407.083275] usb 1-1.2: New USB device strings: Mfr=1, Product=2,
SerialNumber=0
[ 2407.083279] usb 1-1.2: Product: i1Display3 Bootloader
[ 2407.083283] usb 1-1.2: Manufacturer: X-Rite Inc.
[ 2407.086060] hid-generic 0003:0765:5021.0004: hiddev0,hidraw0: USB HID
v1.11 Device [X-Rite Inc. i1Display3 Bootloader] on
usb-0000:00:1a.0-1.2/input0
[ 2407.842362] usb 1-1.2: USB disconnect, device number 8
[ 2408.019082] usb 1-1.2: new full-speed USB device number 9 using ehci-pci
[ 2408.107524] usb 1-1.2: New USB device found, idVendor=0765,
idProduct=5020
[ 2408.107531] usb 1-1.2: New USB device strings: Mfr=1, Product=2,
SerialNumber=0
[ 2408.107535] usb 1-1.2: Product: i1Display3
[ 2408.107538] usb 1-1.2: Manufacturer: X-Rite, Inc.
[ 2408.109804] hid-generic 0003:0765:5020.0005: hiddev0,hidraw0: USB HID
v1.11 Device [X-Rite, Inc. i1Display3] on usb-0000:00:1a.0-1.2/input0

And udev seems to be happy about it:

% udevadm monitor --environment --udev # plugging in the device:
monitor will print the received events for:
UDEV - the event which udev sends out after rule processing

UDEV  [3659.893310] add  /class/usbmisc (class)
ACTION=add
DEVPATH=/class/usbmisc
SEQNUM=3434
SUBSYSTEM=class
USEC_INITIALIZED=3659893139

UDEV  [3659.894709] add
/devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2 (usb)
ACTION=add
BUSNUM=001
DEVNAME=/dev/bus/usb/001/012
DEVNUM=012
DEVPATH=/devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2
DEVTYPE=usb_device
DRIVER=usb
ID_BUS=usb
ID_MODEL=i1Display3_Bootloader
ID_MODEL_ENC=i1Display3\x20Bootloader
ID_MODEL_ID=5021
ID_REVISION=0001
ID_SERIAL=X-Rite_Inc._i1Display3_Bootloader
ID_USB_INTERFACES=:03:
ID_VENDOR=X-Rite_Inc.
ID_VENDOR_ENC=X-Rite\x20Inc.
ID_VENDOR_FROM_DATABASE=X-Rite, Inc.
ID_VENDOR_ID=0765
MAJOR=189
MINOR=11
PRODUCT=765/5021/1
SEQNUM=3431
SUBSYSTEM=usb
TYPE=0/0/0
USEC_INITIALIZED=3659894455
...

But the calibration isn't working.  Here's the bit of the journal showing
gnome complaining when I hit the start button:

% journalctl -xb # here's the relevant part:
Aug 30 22:36:23 luigi colord[1036]: (colord:1036): Cd-WARNING **: the child
exited with return code 1
Aug 30 22:36:23 luigi gnome-session[1374]: (gnome-control-center:1829):
color-cc-panel-WARNING **: calibration failed with code 1: argyll-spotread
exited unexpectedly
Aug 30 22:36:24 luigi colord[1036]: (colord:1036): Cd-WARNING **: no child
pid to kill!
Aug 30 22:36:24 luigi gnome-session[1374]: (gnome-control-center:1829):
color-cc-panel-WARNING **: failed to start calibrate: failed to calibrate

So when I saw that I started running argyll-spotread on its own to see if
that would still fail:

% argyll-spotread -v -D4
usb_check_and_add: found instrument vid 0x0765, pid 0x5020
new_inst: called with path '/dev/bus/usb/001/013 (X-Rite i1 DisplayPro,
ColorMunki Display)'
Connecting to the instrument ..
i1d3_init_coms: called
i1d3_init_coms: About to init USB
usb_open_port: open port '/dev/bus/usb/001/013' succeeded
i1d3_command: Sending cmd 'GetStatus' args '00 01 00 00 00 00 00 00'
coms_usb_transaction: Submitting urb to fd 3 failed with -1
i1d3_command: response read failed with ICOM err 0x2
coms_usb_transaction: Submitting urb to fd 3 failed with -1
i1d3_init_coms: failed with rv = 0x70062
Failed to initialise communications with instrument
or wrong instrument or bad configuration!
('Communications failure' + 'Communications failure')
urb_reaper: cleared requests

And when that happens I see this out in dmesg (the USB IDs may not line up
since I plugged and unplugged this many times):

% dmesg
[  232.711891] usb 1-1.2: usbfs: usb_submit_urb returned -28
[  232.711910] usb 1-1.2: usbfs: usb_submit_urb returned -28
[  232.784675] usb 1-1.2: reset full-speed USB device number 7 using
ehci-pci
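
One guess is permissions on the raw USB node. Argyll normally relies on a
udev rule so that non-root users can open the device; a rule along these
lines should cover both the bootloader and runtime IDs seen in dmesg above
(a sketch; the plugdev group and the file name are assumptions, adjust to
taste):

# /etc/udev/rules.d/55-xrite-i1d3.rules (hypothetical file)
SUBSYSTEM=="usb", ATTRS{idVendor}=="0765", ATTRS{idProduct}=="5020", MODE="0660", GROUP="plugdev"
SUBSYSTEM=="usb", ATTRS{idVendor}=="0765", ATTRS{idProduct}=="5021", MODE="0660", GROUP="plugdev"

That said, spotread does manage to open the port before the urb submission
fails, so this may well not be the whole story.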

My questions are:

1. Has anyone successfully used this model of calibrator to calibrate
their display with the gnome tools?

2. Am 

Re: [gentoo-user] Re: USB crucial file recovery

2016-08-31 Thread Alan McKinnon

On 31/08/2016 17:25, Grant wrote:
>>> Is there a
>>> filesystem that will make that unnecessary and exhibit better
>>> reliability than NTFS?
>>
>> Yes, FAT. It works and works well.
>> Or exFAT which is Microsoft's solution to the problem of very large
>> files on FAT.
>
>
> FAT32 won't work for me since I need to use files larger than 4GB.  I
> know it's beta software but should exfat be more reliable than ntfs?

It doesn't do all the fancy journalling that ntfs does, so based solely
on complexity, it ought to be more reliable.

None of us have done real tests and mentioned it here, so we really
don't know how it pans out in the real world.

Do a bunch of tests yourself and decide

>> Which NTFS system are you using?
>>
>> ntfs kernel module? It's quite dodgy and unsafe with writes
>> ntfs-3g on fuse? I find that one quite solid
>
>
> I'm using ntfs-3g as opposed to the kernel option(s).

I'm offering 10 to 1 odds that your problems came from a faulty USB
stick, or maybe one that you yanked too soon







Re: [gentoo-user] Re: USB crucial file recovery

2016-08-31 Thread Neil Bothwick
On Wed, 31 Aug 2016 13:09:43 -0400, waltd...@waltdnes.org wrote:

> > Have you considered using cloud storage for the files instead? That
> > also gives you the option of version control with some services.  
> 
>   The initial backup of my hard drives would easily burn through my
> monthly gigabytes allotment.  Until everybody gets truly unlimited
> bandwidth, forget about it.

Who mentioned using it for backups? I suggested as an alternative to
a USB stick for sharing a file or two between machines.

What is your monthly gigabyte allotment for your LAN? Keeping the
files on a personal cloud, or on NAS storage, may well be the better
alternative.


-- 
Neil Bothwick

"Time is the best teacher., unfortunately it kills all the students"




Re: [gentoo-user] Re: USB crucial file recovery

2016-08-31 Thread waltdnes
On Wed, Aug 31, 2016 at 08:47:11AM +0100, Neil Bothwick wrote
> On Wed, 31 Aug 2016 08:45:22 +0100, Neil Bothwick wrote:
> 
> > USB sticks are not that reliable to start with, so
> > relying on the filesystem to preserve your important files is not
> > enough. You have spent far more time on this than you would have spent
> > making backups of the file!
> 
> Have you considered using cloud storage for the files instead? That also
> gives you the option of version control with some services.

  The initial backup of my hard drives would easily burn through my
monthly gigabytes allotment.  Until everybody gets truly unlimited
bandwidth, forget about it.

-- 
Walter Dnes 
I don't run "desktop environments"; I run useful applications



Re: [gentoo-user] Re: USB crucial file recovery

2016-08-31 Thread Rich Freeman
On Wed, Aug 31, 2016 at 10:33 AM, Michael Mol  wrote:
> On Wednesday, August 31, 2016 12:12:15 AM Volker Armin Hemmann wrote:
>> On 30.08.2016 at 23:59, Rich Freeman wrote:
>> >
>> > That depends on the mode of operation.  In journal=data I believe
>> > everything gets written twice, which should make it fairly immune to
>> > most forms of corruption.
>>
>> nope. Crash at the wrong time, data gone. FS hopefully sane.
>
> In data=journal mode, the contents of files pass through the journal as
> well, ensuring that, at least as far as the filesystem's responsibility
> is concerned, the data will be intact in the event of a crash.
>

Correct.  As with any other sane filesystem, if you're using
data=journal mode with ext4 then your filesystem will always reflect
the state of data and metadata on a transaction boundary.

If you write something to disk and pull the power, then after fsck the
disk will either contain the contents of your files before you did the
write, or after the write was completed, and never anything in-between.

This is barring silent corruption, which ext4 does not protect against.

> Now, I can still think of ways you can lose data in data=journal mode:

Agree, though all of those concerns apply to any filesystem.  If you
unplug a device without unmounting it, or pull the power when writes are
pending, or never hit save, or whatever, then your data won't end up on disk.
Now, a good filesystem should ensure that the data which is on disk is
completely consistent.  That is, you won't get half of a write, just
all or nothing.

>> >> If you want an fs that cares about your data: zfs.
>> >
>> > I won't argue that the COW filesystems have better data security
>> > features.  It will be nice when they're stable in the main kernel.
>>
>> it is not so much about cow, but integrity checks all the way from the
>> moment the cpu spends some cycles on it.

What COW does get you is the security of data=journal without the
additional cost of writing it twice.  Since data is not overwritten in
place you ensure that on an fsck the system can either roll the write
completely forwards or backwards.

With data=ordered on ext4 there is always the risk of a
half-overwritten file if you are overwriting in place.

But I agree that many of the zfs/btrfs data integrity features could
be implemented on a non-cow filesystem.  Maybe ext5 will have some of
them, though I'm not sure how much work is going into that vs just
fixing btrfs, or begging Oracle to re-license zfs.

>> Caught some silent file
>> corruptions that way. Switched to ECC ram and never saw them again.
>
> In-memory corruption of data is a universal hazard. ECC should be the norm,
> not the exception, honestly.
>

Couldn't agree more here.  The hardware vendors aren't helping,
though, in their quest to try to make more money from those sensitive
to such things.  I believe Intel disables ECC on anything less than an
i7.  As I understand it most of the mainline AMD offerings support it
(basically anything over $80 or so), but it isn't clear to me what
motherboard support is required and the vendors almost never make
mention of it on anything reasonably-priced.

If your RAM gets hosed then any filesystem is going to store bad data
or metadata for a multitude of reasons.  The typical x86+ arch wasn't
designed to handle hardware failures around anything associated with
cpu/ram.
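
(If you're not sure whether ECC is actually active on a given box, the
kernel's EDAC subsystem is one way to check. A sketch, assuming the
edac-utils package is installed:

% dmesg | grep -i edac    # did an EDAC driver find a memory controller?
% edac-util --status      # summarises whether EDAC drivers are loaded

No EDAC memory controller usually means no ECC reporting, even if the
DIMMs themselves support it.)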

The ZFS folks tend to make a really big deal out of ECC, but as far as
I'm aware it isn't any more important for ZFS than anything else.  I
think ZFS just tends to draw people really concerned with data
integrity, and once you've controlled everything that happens after
the data gets sent to the hard drive you tend to start thinking about
what happens to it beforehand.  I had to completely reinstall a
windows system not long ago due to memory failure and drive
corruption.  Wasn't that big a deal since I don't keep anything on a
windows box that isn't disposable, or backed up to something else.

-- 
Rich



Re: [gentoo-user] Re: USB crucial file recovery

2016-08-31 Thread Grant
>>  Is there a
>> filesystem that will make that unnecessary and exhibit better
>> reliability than NTFS?
>
> Yes, FAT. It works and works well.
> Or exFAT which is Microsoft's solution to the problem of very large
> files on FAT.


FAT32 won't work for me since I need to use files larger than 4GB.  I
know it's beta software but should exfat be more reliable than ntfs?
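
(The 4GB ceiling is easy to confirm for yourself. A quick test, assuming
the stick is mounted at /mnt/usb:

% dd if=/dev/zero of=/mnt/usb/bigfile bs=1M count=5000

On FAT32 the write stops with "File too large" as soon as the file hits
the 4GB limit.)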


> Which NTFS system are you using?
>
> ntfs kernel module? It's quite dodgy and unsafe with writes
> ntfs-3g on fuse? I find that one quite solid


I'm using ntfs-3g as opposed to the kernel option(s).
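
(For reference, that's the fuse driver from sys-fs/ntfs3g; a typical
mount, with device and mountpoint as examples:

% ntfs-3g /dev/sdb1 /mnt/usb
or equivalently
% mount -t ntfs-3g /dev/sdb1 /mnt/usb)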

- Grant



Re: [gentoo-user] Re: USB crucial file recovery

2016-08-31 Thread Michael Mol

On Wednesday, August 31, 2016 12:12:15 AM Volker Armin Hemmann wrote:
> On 30.08.2016 at 23:59, Rich Freeman wrote:
> > On Tue, Aug 30, 2016 at 4:58 PM, Volker Armin Hemmann wrote:
> >> the journal does not add any data integrity benefits at all. It just
> >> makes it more likely that the fs is in a sane state if there is a crash.
> >> Likely. Not a guarantee. Your data? No one cares.
> > 
> > That depends on the mode of operation.  In journal=data I believe
> > everything gets written twice, which should make it fairly immune to
> > most forms of corruption.
> 
> nope. Crash at the wrong time, data gone. FS hopefully sane.

No, seriously. Mount with data=journal. Per ext4(5):

data={journal|ordered|writeback}
       Specifies the journaling mode for file data.  Metadata is always
       journaled.  To use modes other than ordered on the root filesystem,
       pass the mode to the kernel as boot parameter, e.g.
       rootflags=data=journal.

       journal
              All data is committed into the journal prior to being
              written into the main filesystem.

       ordered
              This is the default mode.  All data is forced directly
              out to the main file system prior to its metadata being
              committed to the journal.

       writeback
              Data ordering is not preserved – data may be written into
              the main filesystem after its metadata has been committed
              to the journal.  This is rumoured to be the
              highest-throughput option.  It guarantees internal
              filesystem integrity, however it can allow old data to
              appear in files after a crash and journal recovery.



In writeback mode, only filesystem metadata goes through the journal. This 
guarantees that the filesystem's structure itself will remain intact in the 
event of a crash.

In data=journal mode, the contents of files pass through the journal as well, 
ensuring that, at least as far as the filesystem's responsibility is concerned, 
the data will be intact in the event of a crash.
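
Concretely, switching a filesystem to data=journal is a one-liner. A
sketch, with device and mountpoint as examples:

% mount -o data=journal /dev/sdb1 /mnt/data

or, to store the option persistently in the superblock:

% tune2fs -o journal_data /dev/sdb1

For the root filesystem, pass rootflags=data=journal on the kernel
command line, as the man page notes.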

Now, I can still think of ways you can lose data in data=journal mode:

* You mounted the filesystem with barrier=0 or with nobarrier; this can result 
in data writes going to disk out of order, if the I/O stack supports barriers. 
If you say "my file is ninety bytes", "here are ninety bytes of data, all 9s", 
"my file is now thirty bytes", "here are thirty bytes of data, all 3s", then in 
the end you should have a thirty-byte file filled with 3s. If you have barriers 
enabled and you crash halfway through the whole process, you should find a file 
of ninety bytes, all 9s. But if you have barriers disabled, the data may hit 
disk as though you'd said "my file is ninety bytes, here are ninety bytes of 
data, all 9s, here are thirty bytes of data, all 3s, now my file is thirty 
bytes." If that happens, and you crash partway through the commit to disk, you 
may see a ninety-byte file consisting of thirty 3s and sixty 9s. Or things may 
land such that you see a thirty-byte file of 9s.

* Your application didn't flush its writes to disk when it should have (see 
the sketch after this list).

* Your vm.dirty_bytes or vm.dirty_ratio are too high, you've been writing a 
lot to disk, and the kernel still has a lot of data buffered waiting to be 
written. (Well, that can always lead to data loss regardless of how high those 
settings are, which is why applications should flush their writes.)

* You've used hdparm to enable write buffers in your hard disks, and your hard 
disks lose power while their buffers have data waiting to be written.

* You're using a buggy disk device that does a poor job of handling power 
loss. Such as some SSDs which don't have large enough capacitors for their own 
write reordering. Or just about any flash drive.

* There's a bug in some code, somewhere.
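
On the flushing point, the usual belt-and-braces idiom is write to a temp
file, flush, then rename. A sketch in shell (file names are examples;
sync(1) taking a file argument needs a reasonably new coreutils):

% echo "new contents" > important.txt.tmp
% sync important.txt.tmp                # force the temp file's data to disk
% mv important.txt.tmp important.txt    # atomic replace within one filesystem

A C program would fsync() the file descriptor before the rename() for the
same effect.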

> 
> > f2fs would also have this benefit.  Data is not overwritten in-place
> > in a log-based filesystem; they're essentially journaled by their
> > design (actually, they're basically what you get if you ditch the
> > regular part of the filesystem and keep nothing but the journal).
> > 
> >> If you want an fs that cares about your data: zfs.
> > 
> > I won't argue that the COW filesystems have better data security
> > features.  It will be nice when they're stable in the main kernel.
> 
> it is not so much about cow, but integrity checks all the way from the
> moment the cpu spends some cycles on it. Caught some silent file
> corruptions that way. Switched to ECC ram and never saw them again.

In-memory corruption of data is a universal hazard. ECC should be the norm, 
not the exception, honestly.

-- 
:wq



Re: [gentoo-user] Re: USB crucial file recovery

2016-08-31 Thread Neil Bothwick
On Wed, 31 Aug 2016 12:30:42 +0200, Alarig Le Lay wrote:

> On Wed Aug 31 08:47:11 2016, Neil Bothwick wrote:
> > Have you considered using cloud storage for the files instead? That
> > also gives you the option of version control with some services.  
> 
> Seriously, why cloud? The Cloud is basically a marketing term that
> means “the Internet, like before, but cooler”, so it’s just someone
> else’s computer.

Not necessarily; it's a catch-all term for network storage, which could be
ownCloud running on a LAN. However, professionally provided services are
many orders of magnitude safer than storing important files on a no-name
USB stick using a reverse-engineered filesystem running through a
userspace layer.

Or you could simply use a shared folder synced with something like
SyncThing for everyone to access the files. Then you have safer hard
drive storage and some level of backup.


-- 
Neil Bothwick

DCE seeks DTE for mutual exchange of data.




Re: [gentoo-user] Re: USB crucial file recovery

2016-08-31 Thread Rich Freeman
On Wed, Aug 31, 2016 at 6:30 AM, Alarig Le Lay  wrote:
> On Wed Aug 31 08:47:11 2016, Neil Bothwick wrote:
>> Have you considered using cloud storage for the files instead? That also
>> gives you the option of version control with some services.
>
> Seriously, why cloud? The Cloud is basically a marketing term that
> means “the Internet, like before, but cooler”, so it’s just someone
> else’s computer. I think that almost everybody here has more than one
> computer, or at least more than one hard disk drive. So… why not use
> it? You will know who owns your data.

It might have something to do with the fact that cloud services at
least run backups of their servers.

I'd be the first to agree that it is possible to do a better job
yourself at providing the sorts of services you find on dropbox,
google drive, lastpass, and so on.  However, the reality is that most
people don't actually do a better job with it, which is why I see the
occasional post on Facebook about how some relative lost all their
files when their hard drive crashed, or when some ransomware came
along.  Most who have "backups" just have a USB hard drive with some
software that came with it, which is probably always mounted.

If you know how to professionally manage a server, then sure, feel
free to DIY.  Though, you might be surprised at how many people who do
know how to professionally manage servers still use cloud services.
The plethora of clients make them convenient for some things (though I
always back them up).  And I store all my important backups encrypted
on S3 (I don't care if they lose them as long as it isn't on the same
day that I need them, and if they want to try to data mine files that
have gone through gpg I wish them good luck).
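
(For the curious, that kind of encrypted backup can be a couple of
commands. A sketch, with hypothetical paths and bucket name:

% f=backup-$(date +%F).tar.gz.gpg
% tar cz /home/me/important | gpg --symmetric --cipher-algo AES256 -o "$f"
% aws s3 cp "$f" s3://my-backup-bucket/

Everything leaves the machine as ciphertext; S3 never sees the plaintext.)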

-- 
Rich



Re: [gentoo-user] Re: USB crucial file recovery

2016-08-31 Thread Alarig Le Lay
On Wed Aug 31 08:47:11 2016, Neil Bothwick wrote:
> Have you considered using cloud storage for the files instead? That also
> gives you the option of version control with some services.

Seriously, why cloud? The Cloud is basically a marketing term that
means “the Internet, like before, but cooler”, so it’s just someone
else’s computer. I think that almost everybody here has more than one
computer, or at least more than one hard disk drive. So… why not use
it? You will know who owns your data.

-- 
alarig




Re: [gentoo-user] Re: USB crucial file recovery

2016-08-31 Thread Neil Bothwick
On Wed, 31 Aug 2016 08:45:22 +0100, Neil Bothwick wrote:

> USB sticks are not that reliable to start with, so
> relying on the filesystem to preserve your important files is not
> enough. You have spent far more time on this than you would have spent
> making backups of the file!

Have you considered using cloud storage for the files instead? That also
gives you the option of version control with some services.


-- 
Neil Bothwick

Remember that the Titanic was built by experts, and the Ark by a newbie




Re: [gentoo-user] Re: USB crucial file recovery

2016-08-31 Thread Neil Bothwick
On Tue, 30 Aug 2016 17:08:26 -0700, Grant wrote:

> > You can't control ownership and permissions of existing files with
> > mount options on a Linux filesystem. See man mount.  
> 
> So in order to use a USB stick between multiple Gentoo systems with
> ext2, I need to make sure my users have matching UIDs/GIDs? 

Yes, I said that when I first mentioned ext2.
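
Checking takes seconds; a sketch, with an example username:

% id -u grant    # run on each machine, the numbers must match

If they differ, either realign one side with usermod -u (and chown that
user's existing files afterwards), or pick a filesystem that doesn't
store owners at all.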

> I think
> this is how I ended up on NTFS in the first place.  Is there a
> filesystem that will make that unnecessary and exhibit better
> reliability than NTFS?

FAT is tried and tested as long as you can live with the file size
limitations. But USB sticks are not that reliable to start with, so
relying on the filesystem to preserve your important files is not enough.
You have spent far more time on this than you would have spent making
backups of the file!
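
If the FAT file size limit does bite, exFAT is the usual escape hatch. A
sketch, assuming sys-fs/exfat-utils plus sys-fs/fuse-exfat and the stick
at /dev/sdb1:

% mkfs.exfat /dev/sdb1
% mount -t exfat /dev/sdb1 /mnt/usb

Bear in mind the Linux support is a reverse-engineered fuse driver, so the
reliability caveats discussed elsewhere in this thread apply here too.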


-- 
Neil Bothwick

Use Colgate toothpaste or end up with teeth like a Ferengi.

