Re: [gentoo-user] viewer for "ps" postscript files

2021-12-23 Thread thelma

On 12/23/21 19:36, the...@sys-concept.com wrote:
[snip]


I deleted that file: /run/user/1000/dconf/user (owned by root); it was empty
anyhow.
After logging out and back in, the system recreated the file under my own user name.

But I still cannot open the "ps" file with evince. I'm getting an error message:

evince W-9\ Form.ps

(evince:19345): GLib-GObject-WARNING **: 19:26:40.904: invalid cast from 
'GtkFileChooserNative' to 'GtkWidget'

It's worth noting that I just upgraded my system.



Solved:
The new 'evince' package had been installed without "postscript" support; enabling
it solved the problem with evince, but the HylaFAX viewer "YajHFC" still has a
problem viewing PS files.
Before the upgrade I was able to view PS files with evince.
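On Gentoo this fix maps to a USE-flag change. A sketch of the usual workflow — the `postscript` flag name matches the thread's description of app-text/evince, but verify it on your own system with `equery` (from gentoolkit) before rebuilding:

```shell
# See whether evince was built with PostScript support:
equery uses app-text/evince | grep -i postscript

# Enable the flag and rebuild just evince:
echo 'app-text/evince postscript' >> /etc/portage/package.use/evince
emerge --ask --oneshot app-text/evince
```
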

Now I'm getting strange output:

yajhfc.file.FileConverter$ConversionException: Non-zero exit code of 
GhostScript (1):
Error: /undefined in .setpdfwrite
Operand stack:

Execution stack:
   %interp_exit   .runexec2   --nostringval--   --nostringval--   
--nostringval--   2   %stopped_push   --nostringval--   --nostringval--   
--nostringval--   false   1   %stopped_push   .runexec2   --nostringval--   
--nostringval--   --nostringval--   2   %stopped_push   --nostringval--
Dictionary stack:
   --dict:776/1123(ro)(G)--   --dict:0/20(G)--   --dict:75/200(L)--
Current allocation mode is local
GPL Ghostscript 9.55.0: Unrecoverable error, exit code 1

at 
yajhfc.file.GhostScriptMultiFileConverter.convertMultiplePSorPDFFiles(GhostScriptMultiFileConverter.java:101)
at 
yajhfc.file.MultiFileConverter.convertMultipleFiles(MultiFileConverter.java:144)
at 
yajhfc.file.MultiFileConverter.convertMultipleFilesToSingleFile(MultiFileConverter.java:206)
at 
yajhfc.file.MultiFileConverter.viewMultipleFiles(MultiFileConverter.java:181)
at yajhfc.MainWin$ShowWorker.doWork(MainWin.java:498)
at yajhfc.util.ProgressWorker.run(ProgressWorker.java:189)


YajHFC 0.6.1
Java 1.8.0_312 (Temurin)
OpenJDK Runtime Environment 1.8.0_312-b07
OpenJDK 64-Bit Server VM
Linux 5.10.61-gentoo (amd64)
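The `/undefined in .setpdfwrite` failure above is consistent with the `.setpdfwrite` operator having been deprecated around Ghostscript 9.50 and removed by 9.55, while YajHFC still passes it via `-c`. One hedged workaround, assuming that is how the argument arrives, is a wrapper that drops the obsolete `-c .setpdfwrite` pair before invoking the real binary. The function below only prints the filtered arguments so the idea can be checked; in an actual wrapper you would `exec` them into the renamed original (e.g. a hypothetical `/usr/bin/gs.real`) instead, or simply remove the option from YajHFC's Ghostscript settings if it exposes them:

```shell
# filter_gs_args: echo the given Ghostscript arguments one per line,
# dropping any "-c .setpdfwrite" pair (an operator removed in newer
# Ghostscript releases). Other "-c" uses are passed through untouched.
filter_gs_args() {
    prev=
    for a in "$@"; do
        if [ "$prev" = "-c" ] && [ "$a" = ".setpdfwrite" ]; then
            prev=            # drop both tokens of the pair
            continue
        fi
        [ -n "$prev" ] && printf '%s\n' "$prev"
        prev=$a
    done
    [ -n "$prev" ] && printf '%s\n' "$prev"
}

# Example: the sort of command line YajHFC is assumed to build
# (prints the six surviving tokens, one per line, pair removed):
filter_gs_args gs -dBATCH -dNOPAUSE -sDEVICE=tiffg4 -c .setpdfwrite -f in.ps
```
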



Re: [gentoo-user] viewer for "ps" postscript files

2021-12-23 Thread thelma

On 12/23/21 17:54, eric wrote:

[snip]


Who is supposed to be the owner of the file: /run/user/1000/dconf/user

-rw------- 1 root root 2 Dec 23 16:48 /run/user/1000/dconf/user




Shouldn't the owner be whoever has the uid of 1000? That is how it is on my
system.

Regards,

Eric


I deleted that file: /run/user/1000/dconf/user (owned by root); it was empty
anyhow.
After logging out and back in, the system recreated the file under my own user name.

But I still cannot open the "ps" file with evince. I'm getting an error message:

evince W-9\ Form.ps

(evince:19345): GLib-GObject-WARNING **: 19:26:40.904: invalid cast from 
'GtkFileChooserNative' to 'GtkWidget'
 
It's worth noting that I just upgraded my system.




Re: [gentoo-user] viewer for "ps" postscript files

2021-12-23 Thread Miles Malone
It shouldn't be root; it should be your current user. It looks like you've
run a GUI program as root in your current session, which has
created a root-owned dconf/user.
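That diagnosis is easy to check directly. A small sketch, assuming the path layout from this thread (`/run/user/<uid>/dconf/user`) and GNU coreutils `stat`:

```shell
# Verify the per-session dconf runtime file is owned by the login
# user; a root-owned copy (e.g. left behind by a GUI app run via su)
# breaks dconf for the normal user, as seen in this thread.
uid=$(id -u)
f="/run/user/$uid/dconf/user"
if [ -e "$f" ]; then
    owner=$(stat -c '%u' "$f")
    if [ "$owner" != "$uid" ]; then
        echo "$f is owned by uid $owner, not $uid -- delete it and re-login"
    fi
else
    echo "no dconf runtime file at $f (nothing to fix)"
fi
```

Deleting the file is safe because it only caches the session's dconf state; logging out and back in recreates it with the right owner, as reported later in the thread.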

On Fri, 24 Dec 2021 at 10:40,  wrote:
> [snip]
> Who is supposed to be the owner of the file: /run/user/1000/dconf/user
>
> -rw------- 1 root root 2 Dec 23 16:48 /run/user/1000/dconf/user
>
>



Re: [gentoo-user] viewer for "ps" postscript files

2021-12-23 Thread eric

On 12/23/21 5:40 PM, the...@sys-concept.com wrote:

[snip]


Who is supposed to be the owner of the file: /run/user/1000/dconf/user

-rw------- 1 root root 2 Dec 23 16:48 /run/user/1000/dconf/user




Shouldn't the owner be whoever has the uid of 1000? That is how it is
on my system.


Regards,

Eric



Re: [gentoo-user] viewer for "ps" postscript files

2021-12-23 Thread thelma

On 12/23/21 17:19, Mark Knecht wrote:

[snip]

Have you Googled it? I think this error has been all over the place recently.

- Mark


Who is supposed to be the owner of the file: /run/user/1000/dconf/user

-rw------- 1 root root 2 Dec 23 16:48 /run/user/1000/dconf/user




Re: [gentoo-user] viewer for "ps" postscript files

2021-12-23 Thread thelma





On 12/23/21 17:19, Mark Knecht wrote:

[snip]

Have you Googled it? I think this error has been all over the place recently.

- Mark


Yes, I did, but I haven't found a solution yet.




Re: [gentoo-user] viewer for "ps" postscript files

2021-12-23 Thread Mark Knecht
On Thu, Dec 23, 2021 at 4:53 PM  wrote:
> [snip]
>
> When I try to open the "ps" file with evince I'm getting an error:
>
> evince W-9\ Form.ps
>
> (evince:6866): dconf-CRITICAL **: 16:50:27.034: unable to create file 
> '/run/user/1000/dconf/user': Permission denied.  dconf will not work properly.
>

Have you Googled it? I think this error has been all over the place recently.

- Mark



Re: [gentoo-user] viewer for "ps" postscript files

2021-12-23 Thread thelma

On 12/23/21 15:51, Spackman, Chris wrote:

[snip]

Have you tried a GUI such as Okular or Evince? They both support viewing
.ps files.

In Thunar, just right click on the file and choose "Open with >" and
either Okular or Evince if they are listed, or "Open with Other
Application ..." if they aren't. (But if they aren't listed, you might
have to install them, or a similar GUI viewer.)



When I try to open the "ps" file with evince I'm getting an error:

evince W-9\ Form.ps

(evince:6866): dconf-CRITICAL **: 16:50:27.034: unable to create file 
'/run/user/1000/dconf/user': Permission denied.  dconf will not work properly.



Re: [gentoo-user] viewer for "ps" postscript files

2021-12-23 Thread Spackman, Chris
On 2021/12/23 at 01:57pm, the...@sys-concept.com wrote:
> I have the latest ghostscript installed: ghostscript-gpl-9.55.0-r1
> 
> When I try to view a postscript file from the command line (gs W-9_Form.ps)
> it works OK, but when I try to open the same file in the Xfce desktop it opens
> and closes instantly.

My guess would be that in XFCE, gs is successfully doing whatever
(showing, interpreting, ??) in a terminal window and then immediately
closing when done.

Have you tried a GUI such as Okular or Evince? They both support viewing
.ps files.

In Thunar, just right click on the file and choose "Open with >" and
either Okular or Evince if they are listed, or "Open with Other
Application ..." if they aren't. (But if they aren't listed, you might
have to install them, or a similar GUI viewer.)

> I use hylafax + YajHFC and when I try to open some files I get an error:

I honestly have no idea about hylafax and YajHFC. Unless there is more
here than just trying to view a .ps file (or you are working in a very
restricted environment), they are probably not the best tool.

-- 
Chris Spackman (he / him)   ch...@osugisakae.com

ESL Coordinator The Graham Family of Schools
ESL EducatorColumbus State Community College
Japan Exchange and Teaching Program   Wajima, Ishikawa 1995-1998
Linux user since 1998 Linux User #137532




Re: [gentoo-user] Synchronous writes over the network.

2021-12-23 Thread Wols Lists

On 23/12/2021 21:50, Mark Knecht wrote:

In the case of astrophotography I will have multiple copies of the
original photos. The process of stacking the individual photos can
create gigabytes of intermediate files but as long as the originals
are safe then it's just a matter of starting over. In my astrophotography
setup I create about 50Mbyte per minute and take pictures for hours
so a set of photos coming in at 1-2GB and up to maybe 10GB isn't
uncommon. I might create 30-50GB of intermediate files which
eventually get deleted but they can reside on the server while I'm
working. None of that has to be terribly fast.


:-)

Seeing as I run LVM, that sounds like a perfect use case. Create an LV, dump 
the files on it, and when you're done unmount and delete the LV.


I'm thinking of pulling the same stunt with wherever Gentoo dumps its 
build files etc. Let it build up until I think I need a clearout, then 
create a new LV and scrap the old one.
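The throwaway-LV workflow described above is only a handful of commands. A sketch — the volume group name `vg0`, the 50G size, and the mount point are placeholders for whatever your setup uses:

```shell
# Create a scratch LV for intermediate files, use it, then discard it.
lvcreate -L 50G -n scratch vg0
mkfs.ext4 /dev/vg0/scratch
mount /dev/vg0/scratch /mnt/scratch
# ... dump intermediate stacking/build files here ...
umount /mnt/scratch
lvremove -y vg0/scratch      # all the intermediates disappear in one step
```
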


Cheers,
Wol



Re: [gentoo-user] Synchronous writes over the network.

2021-12-23 Thread Mark Knecht
On Thu, Dec 23, 2021 at 10:27 AM Rich Freeman  wrote:
>
> On Thu, Dec 23, 2021 at 11:56 AM Mark Knecht  wrote:
> >

>
> > Instead
> > of a ZIL in machine 1 the SSD becomes a ZLOG cache most likely holding
> > a cached copy of the currently active astrophotography projects.
>
> I think you're talking about L2ARC.  I don't think "ZLOG" is a thing,
> and a log device in ZFS is just another name for ZIL (since that's
> what it is - a high performance data journal).
>

Thank you. Yes, L2ARC.

> L2ARC drives don't need to be mirrored and their failure is harmless.
> They generally only improve things, but of course they do nothing to
> improve write performance - just read performance.
>
> >As always I'm interested in your comments about what works or
> > doesn't work about this sort of setup.
>
> Ultimately it all comes down to your requirements and how you use
> stuff.  What is the impact to you if you lose this real-time audio
> recording?  If you will just have to record something over again but
> that isn't a big deal, then what you're doing sounds fine to me.

Actually, no.

>   If
> you are recording stuff that is mission-critical and can't be repeated
> and you're going to lose a lot of money or reputation if you lose a
> recording, then I'd have that recording machine be pretty reliable
> which means redundant everything (server grade hardware with fault
> tolerance and RAID/etc, or split the recording onto two redundant sets
> of cheap consumer hardware).

Closer to mission critical.

When recording live music, especially in situations with lots of
musicians, you don't want to miss a good take. When you are just
capturing a band playing, it's simply about getting it onto disk;
but when you are adding to music that's already on disk, say a vocalist
singing live over the top of tracks the band played earlier,
having the hardware ruin a good take is a real downer.

>
> I do something similar - all the storage I care about is on
> Linux/ZFS/lizardfs with redundancy and backup.  I do process
> photos/video on a windows box on an NVMe, but that is almost never the
> only copy of my data.  I might offload media to the windows box from
> my camera, but if I lose that then I still have the camera.  I might
> do some processing on windows like generating thumbnails/etc on NVMe
> before I move it to network storage.  In the end though it goes to zfs
> on linux and gets backed up and so on.  If I need to process some
> videos I might copy data back to a windows NVMe for more performance
> if I don't want to directly spool stuff off the network, but my risks
> are pretty minimal if that goes down at any point.  And this is just
> personal stuff - I care about it and don't want to lose it, but it
> isn't going to damage my career if I lose it.  If I were dealing with
> data professionally it still wouldn't be a bad arrangement but I might
> invest in a few things differently.
>

In the case of recording audio it comes down to how large a
project you are working on. Three-minute pop songs aren't much of an
issue: 10-20 stereo tracks at 96KHz isn't all that large, and the
audio might even fit in DRAM. If you're working on some
wonderful 30-minute prog rock piece with 100 or more stereo tracks
it gets a lot larger, but (in my mind anyway) the main desktop
machine will have some sort of M.2 drive and maybe it fits in there;
the audio gets read off hard disk before the session starts, so
there's probably no problem.

I haven't given this a huge amount of thought because my current
machine does an almost perfect job with 8-9 year old technology.

In the case of astrophotography I will have multiple copies of the
original photos. The process of stacking the individual photos can
create gigabytes of intermediate files but as long as the originals
are safe then it's just a matter of starting over. In my astrophotography
setup I create about 50Mbyte per minute and take pictures for hours
so a set of photos coming in at 1-2GB and up to maybe 10GB isn't
uncommon. I might create 30-50GB of intermediate files which
eventually get deleted but they can reside on the server while I'm
working. None of that has to be terribly fast.

> Just ask yourself what hardware needs to fail for you to lose
> something you care about at any moment of time.  If you can tolerate
> the loss of just about any individual piece of hardware that's a
> pretty good first step for just about anything, and is really all you
> need for most consumer stuff.  Backups are fine as long as they're
> recent enough and you don't mind redoing work.
>
Agreed.

Thanks,
Mark



[gentoo-user] viewer for "ps" postscript files

2021-12-23 Thread thelma

I have the latest ghostscript installed: ghostscript-gpl-9.55.0-r1

When I try to view a postscript file from the command line (gs W-9_Form.ps)
it works OK, but when I try to open the same file in the Xfce desktop it opens and 
closes instantly.

I use HylaFAX + YajHFC,
and when I try to open some files I get an error:

yajhfc.file.FileConverter$ConversionException: Non-zero exit code of 
GhostScript (1):
Error: /undefined in .setpdfwrite
Operand stack:

Execution stack:
   %interp_exit   .runexec2   --nostringval--   --nostringval--   
--nostringval--   2   %stopped_push   --nostringval--   --nostringval--   
--nostringval--   false   1   %stopped_push   .runexec2   --nostringval--   
--nostringval--   --nostringval--   2   %stopped_push   --nostringval--
Dictionary stack:
   --dict:776/1123(ro)(G)--   --dict:0/20(G)--   --dict:75/200(L)--
Current allocation mode is local
GPL Ghostscript 9.55.0: Unrecoverable error, exit code 1

at 
yajhfc.file.GhostScriptMultiFileConverter.convertMultiplePSorPDFFiles(GhostScriptMultiFileConverter.java:101)
at 
yajhfc.file.MultiFileConverter.convertMultipleFiles(MultiFileConverter.java:144)
at 
yajhfc.file.MultiFileConverter.convertMultipleFilesToSingleFile(MultiFileConverter.java:206)
at 
yajhfc.file.MultiFileConverter.viewMultipleFiles(MultiFileConverter.java:181)
at yajhfc.MainWin$ShowWorker.doWork(MainWin.java:498)
at yajhfc.util.ProgressWorker.run(ProgressWorker.java:189)


YajHFC 0.6.1
Java 1.8.0_312 (Temurin)
OpenJDK Runtime Environment 1.8.0_312-b07
OpenJDK 64-Bit Server VM
Linux 5.10.61-gentoo (amd64)

--
Thelma



Re: [gentoo-user] Apparently 2.4 is not >= 2.2?

2021-12-23 Thread Jack

On 2021.12.22 17:31, Steven Lembark wrote:

On Tue, 21 Dec 2021 20:21:17 -0500
Jack  wrote:

> I may well be wrong, but it looks like the problems are not due to
> version, but to python-target mismatches.  You may need to rebuild
> some stuff first, such as pytest-runner and mako.

You are probably right: I've spent more time playing with
PYTHON*TARGET variables and successive rebuilds on this machine
than actually doing any work for about a year. The annoyance is
that I don't do anything else with Python here, so it's only
for the package manager.

This is for a fresh build from a recent stage3. It shouldn't
be this painful to just start a new system.
With a new system, I would think you should not need to alter the  
profile defaults in any way.  Once you do anything manually to those  
python variables, it likely devolves into a black hole.


I would find any python-target related variables in any portage config  
files, and seriously consider removing them.


As someone recently suggested in a different thread, start with just  
"emerge @system" then "emerge @world" (without any non-default options)  
and only when that is done, start adding back the --newuse and --deep  
options.


The other replies have also been good - in terms of where to start  
looking.  Also - which versions of python do you have installed, and  
which do you actually need?
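The advice above can be turned into a concrete sequence. A sketch of the usual Gentoo workflow — the config paths are the standard Portage locations, but your layout may differ, and `--ask` is there so nothing runs unreviewed:

```shell
# Find stray python-target overrides in the Portage config:
grep -rn 'PYTHON_\(TARGETS\|SINGLE_TARGET\)' \
    /etc/portage/make.conf /etc/portage/package.use /etc/portage/env 2>/dev/null

# After removing them, rebuild with profile defaults only:
emerge --ask @system
emerge --ask @world

# Only once that succeeds, reintroduce the extra options:
emerge --ask --newuse --deep --update @world
```
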




Re: [gentoo-user] Synchronous writes over the network.

2021-12-23 Thread Wols Lists

On 23/12/2021 16:56, Mark Knecht wrote:

Rich & Wols,
Thanks for the responses. I'll post a single response here. I had
thought of the need to mirror the ZIL but didn't have enough physical
disk slots in the backup machine for the 2nd SSD. I do think this is a
critical point if I was to use the ZIL at all.


Okay, how heavily are you going to hammer the server writing to it? If 
you aren't going to stress it, don't bother with the ZIL.


Based on inputs from the two of you I'm investigating a different
overall setup for my home network:

Previously - a new main desktop that holds all my data. Lots of disk
space, lots of data. All of my big data work - audio recording
sessions and astrophotography - are done on this machine. Two
__backup__ machines. Desktop machines are backed up to machine 1,
machine 1 backed up to machine 2, machine 2 eventually backed up to
some cloud service.

Now - a new desktop machine that holds audio recording data currently
being recorded and used due to real-time latency requirements.


Sounds good...

Two new
network machines: Machine 1 would be both a backup machine as well as
a file server. The file server portion of this machine holds
astrophotography data and recorded video files. PixInsight running on
my desktop accesses and stores over the network to machine 1. Instead
of a ZIL in machine 1 the SSD becomes a ZLOG cache most likely holding
a cached copy of the currently active astrophotography projects.


Actually, it sounds like the best use of the SSD would be your working 
directory in your desktop.



Machine 1 may also run a couple of VMs over time.


Whatever :-) Just make sure that it's easy to back up! I'd be inclined 
to have a bunch of raid-5'd disks ...


Machine 2 is a pure
backup machine of everything on Machine 1.

I'd say don't waste your money. You don't need a *third* machine. Spend 
the money on some large disk drives, an eSATA card for machine 1, and a 
hard disk docking station ...



FYI - Machine 1 will always be located close to my desktop machines
and use the 1Gb/S wired network. iperf suggests I get about 850Mb/S on
and off of Machine 1. Machine 2 will be remote and generally backed up
overnight using wireless.

As always I'm interested in your comments about what works or
doesn't work about this sort of setup.

My main desktop/server currently has two 4TB drives split 1TB/3TB. The 
two 3TB partitions are raid-5'd with a 3TB drive to give me 6TB of /home 
space.


I'm planning to buy an 8TB drive as a backup. The plan is it will go 
into a test-bed machine, that will be used for all sorts of stuff, but 
it will at least keep a copy of my data off my main machine.


But you get the idea. If you get two spare drives you can back up onto 
them. I don't know what facilities ZFS offers for sync'ing filesystems, 
but if you go somewhere regularly where you can stash a hard disk 
(even a shed down the bottom of the garden :-), you back up onto disk 1, 
swap it for disk 2, back up onto disk 1, swap it for disk 2 ...


AND YOUR BACKUP IS OFF SITE!

Cheers,
Wol



Re: [gentoo-user] Synchronous writes over the network.

2021-12-23 Thread Rich Freeman
On Thu, Dec 23, 2021 at 12:39 PM Mark Knecht  wrote:
>
> I'll respond to Rich's points in a bit but on this point I think
> you're both right - new SSDs are very very reliable and I'm not overly
> worried, but it seems a given that forcing more and more writes to an
> SSD has to up the probability of a failure at some point. Zero writes
> is almost no chance of failure, trillions of writes eventually wears
> something out.
>

Every SSD has a rating for total writes.  This varies and the ones
that cost more will get more writes (often significantly more), and
wear pattern matters a great deal.  Chia fortunately seems to have
died off pretty quickly but there is still a ton of data from those
who were speculating on it, and they were buying high end SSDs and
treating them as expendable resources - and plotting Chia is actually
a fairly ideal use case as you write a few hundred GB and then you
trim it all when you're done, so the entirety of the drive is getting
turned over regularly.  People plotting Chia were literally going
through cases of high-end SSDs due to write wear, running them until
failure in a matter of weeks.

Obviously if you just write something and read it back constantly then
wear isn't an issue.

Just googled the Samsung Evo 870 and they're rated to 600x their
capacity in writes, for example.  If you write 600TB to the 1TB
version of the drive, then it is likely to fail on you not too long
after.

Sure, it is a lot better than it used to be, and for typical use cases
I agree that they last longer than spinning disks.  However, a ZIL is
not a "typical use case" as such things are measured.

-- 
Rich



Re: [gentoo-user] Synchronous writes over the network.

2021-12-23 Thread Mark Knecht
On Thu, Dec 23, 2021 at 10:35 AM Wols Lists  wrote:
>
> On 23/12/2021 17:26, Rich Freeman wrote:
> > Plus it is an SSD that you're forcing a lot of writes
> > through, so that is going to increase your risk of failure at some
> > point.
>
> A lot of people can't get away from the fact that early SSDs weren't
> that good. And I won't touch micro-SD for that reason. But all the
> reports now are that a decent SSD is likely to outlast spinning rust.
>
> Cheers,
> Wol
>

I'll respond to Rich's points in a bit but on this point I think
you're both right - new SSDs are very very reliable and I'm not overly
worried, but it seems a given that forcing more and more writes to an
SSD has to up the probability of a failure at some point. Zero writes
is almost no chance of failure, trillions of writes eventually wears
something out.

Mark



Re: [gentoo-user] Synchronous writes over the network.

2021-12-23 Thread Wols Lists

On 23/12/2021 17:26, Rich Freeman wrote:

Plus it is an SSD that you're forcing a lot of writes
through, so that is going to increase your risk of failure at some
point.


A lot of people can't get away from the fact that early SSDs weren't 
that good. And I won't touch micro-SD for that reason. But all the 
reports now are that a decent SSD is likely to outlast spinning rust.


Cheers,
Wol



Re: [gentoo-user] Synchronous writes over the network.

2021-12-23 Thread Rich Freeman
On Thu, Dec 23, 2021 at 11:56 AM Mark Knecht  wrote:
>
>Thanks for the responses. I'll post a single response here. I had
> thought of the need to mirror the ZIL but didn't have enough physical
> disk slots in the backup machine for the 2nd SSD. I do think this is a
> critical point if I was to use the ZIL at all.

Yeah, I wouldn't run ZIL non-mirrored, especially if your underlying
storage is mirrored.  The whole point of sync is to sacrifice
performance for reliability, and if all it does is force the write to
the one device in the array that isn't mirrored that isn't helping.
Plus if you're doing a lot of syncs then that ZIL could have a lot of
data on it.  Plus it is an SSD that you're forcing a lot of writes
through, so that is going to increase your risk of failure at some
point.

Nobody advocates for non-mirrored ZIL, at least if your array itself
is mirrored.

> Instead
> of a ZIL in machine 1 the SSD becomes a ZLOG cache most likely holding
> a cached copy of the currently active astrophotography projects.

I think you're talking about L2ARC.  I don't think "ZLOG" is a thing,
and a log device in ZFS is just another name for ZIL (since that's
what it is - a high performance data journal).

L2ARC drives don't need to be mirrored and their failure is harmless.
They generally only improve things, but of course they do nothing to
improve write performance - just read performance.
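In `zpool` terms the distinction reads like this. A sketch, not a recommendation — the pool name `tank` and the device paths are placeholders, and both commands need an existing pool:

```shell
# SLOG (the on-disk ZIL): mirrored, because losing it during a crash
# can lose the most recent synchronous writes.
zpool add tank log mirror /dev/disk/by-id/ssd-a /dev/disk/by-id/ssd-b

# L2ARC: a plain, unmirrored cache device; its loss is harmless and it
# only ever accelerates reads.
zpool add tank cache /dev/disk/by-id/ssd-c

zpool status tank    # shows the "logs" and "cache" sections separately
```
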

>As always I'm interested in your comments about what works or
> doesn't work about this sort of setup.

Ultimately it all comes down to your requirements and how you use
stuff.  What is the impact to you if you lose this real-time audio
recording?  If you will just have to record something over again but
that isn't a big deal, then what you're doing sounds fine to me.  If
you are recording stuff that is mission-critical and can't be repeated
and you're going to lose a lot of money or reputation if you lose a
recording, then I'd have that recording machine be pretty reliable
which means redundant everything (server grade hardware with fault
tolerance and RAID/etc, or split the recording onto two redundant sets
of cheap consumer hardware).

I do something similar - all the storage I care about is on
Linux/ZFS/lizardfs with redundancy and backup.  I do process
photos/video on a windows box on an NVMe, but that is almost never the
only copy of my data.  I might offload media to the windows box from
my camera, but if I lose that then I still have the camera.  I might
do some processing on windows like generating thumbnails/etc on NVMe
before I move it to network storage.  In the end though it goes to zfs
on linux and gets backed up and so on.  If I need to process some
videos I might copy data back to a windows NVMe for more performance
if I don't want to directly spool stuff off the network, but my risks
are pretty minimal if that goes down at any point.  And this is just
personal stuff - I care about it and don't want to lose it, but it
isn't going to damage my career if I lose it.  If I were dealing with
data professionally it still wouldn't be a bad arrangement but I might
invest in a few things differently.

Just ask yourself what hardware needs to fail for you to lose
something you care about at any moment of time.  If you can tolerate
the loss of just about any individual piece of hardware that's a
pretty good first step for just about anything, and is really all you
need for most consumer stuff.  Backups are fine as long as they're
recent enough and you don't mind redoing work.

-- 
Rich



Re: [gentoo-user] Synchronous writes over the network.

2021-12-23 Thread Mark Knecht
On Mon, Dec 20, 2021 at 12:52 PM Rich Freeman  wrote:
>
> On Mon, Dec 20, 2021 at 1:52 PM Mark Knecht  wrote:
> >
> > I've recently built 2 TrueNAS file servers. The first (and main) unit
> > runs all the time and serves to backup my home user machines.
> > Generally speaking I (currently) put data onto it using rsync but it
> > also has an NFS mount that serves as a location for my Raspberry Pi to
> > store duplicate copies of astrophotography pictures live as they come
> > off the DSLR in the middle of the night.
> >
> > ...
> >
> > The thing is that the ZIL is only used for synchronous writes and I
> > don't know whether anything I'm doing to back up my user machines,
> > which currently is just rsync commands, is synchronous or could be
> > made synchronous, and I do not know if the NFS writes from the R_Pi
> > are synchronous or could be made so.
> >
>
> Disclaimer: some of this stuff is a bit arcane and the documentation
> isn't great, so I could be missing a nuance somewhere.
>
> First, one of your options is to set sync=always on the zfs dataset,
> if synchronous behavior is strongly desired.  That will force ALL
> writes at the filesystem level to be synchronous.  It will of course
> also normally kill performance but the ZIL may very well save you if
> your SSD performs adequately.  This still only applies at the
> filesystem level, which may be an issue with NFS (read on).
>
> I'm not sure how exactly you're using rsync from the description above
> (rsyncd, directly client access, etc).  In any case I don't think
> rsync has any kind of option to force synchronous behavior.  I'm not
> sure if manually running a sync on the server after using rsync will
> use the ZIL or not.  If you're using sync=always then that should
> cover rsync no matter how you're doing it.
>
> NFS is a little different as both the server-side and client-side have
> possible asynchronous behavior.  By default the nfs client is
> asynchronous, so caching can happen on the client before the file is
> even sent to the server.  This can be disabled with the mount option
> sync on the client side.  That will force all data to be sent to the
> server immediately.  Any nfs server or filesystem settings on the
> server side will not have any impact if the client doesn't transmit
> the data to the server.  The server also has a sync setting which
> defaults to on, and it additionally has another layer of caching on
> top of that which can be disabled with no_wdelay on the export.  Those
> server-side settings probably delay anything getting to the filesystem
> and so they would have precedence over any filesystem-level settings.
>
> As you can see you need to use a bit of a kill-it-with-fire approach
> to get synchronous behavior, as it traditionally performs so poorly
> that everybody takes steps to try to prevent it from happening.
>
> I'll also note that the main thing synchronous behavior protects you
> from is unclean shutdown of the server.  It has no bearing on what
> happens if a client goes down uncleanly.  If you don't expect server
> crashes it may not provide much benefit.
>
> If you're using ZIL you should consider having the ZIL mirrored, as
> any loss of the ZIL devices will otherwise cause data loss.  Use of
> the ZIL is also going to create wear on your SSD so consider that and
> your overall disk load before setting sync=always on the dataset.
> Since the setting is at the dataset level you could have multiple
> mountpoints and have a different sync policy for each.  The default is
> normal POSIX behavior which only syncs when requested (sync, fsync,
> O_SYNC, etc).
>
> --
> Rich
>

Rich & Wols,
   Thanks for the responses. I'll post a single response here. I had
thought of the need to mirror the ZIL but didn't have enough physical
disk slots in the backup machine for the 2nd SSD. I do think this is a
critical point if I was to use the ZIL at all.

   Based on inputs from the two of you I'm investigating a different
overall setup for my home network:

Previously - a new main desktop that holds all my data. Lots of disk
space, lots of data. All of my big data work - audio recording
sessions and astrophotography - is done on this machine. Two
__backup__ machines. Desktop machines are backed up to machine 1,
machine 1 is backed up to machine 2, and machine 2 is eventually
backed up to some cloud service.

Now - a new desktop machine that holds audio recording data currently
being recorded and used due to real-time latency requirements. Two new
network machines: Machine 1 would be both a backup machine as well as
a file server. The file server portion of this machine holds
astrophotography data and recorded video files. PixInsight running on
my desktop accesses and stores over the network to machine 1. Instead
of a ZIL in machine 1 the SSD becomes a ZLOG cache most likely holding
a cached copy of the currently active astrophotography projects.
Machine 1 may also run a couple of VMs over time. Machine 2 is a pure
backup 

Re: [gentoo-user] Movie editing softeware

2021-12-23 Thread Spackman, Chris
On 2021/12/21 at 07:17pm, Wols Lists wrote:
> On 21/12/2021 18:49, Spackman, Chris wrote:

> > 2b. press the "export video" button at the bottom of the window. Here,
> >  for me, the defaults work fine.
> 
> The problem is 2b. For me, it's an extremely simple case of "I gave you 
> a dot ts file, I want a dot ts file back".
> 
> The act of importing the ts file into the project seems to throw that 
> information away. I know a .ts is some sort of a container, with streams 
> and whatnot, but I don't have a clue what's in it. Why should I?
> 
> All I know is I want to end up with EXACTLY the same sort of file I 
> started with, and this seems exactly what most video editors don't have 
> a clue how to do!
> 
> (Like a word .docx - I don't give a monkeys what's inside it, I don't 
> need to, word takes care of all that. Why can't any half-decent video 
> editor do the same?)
> 
> And yes, I have tried. You're hearing the screams of frustration from 
> countless failed attempts.

Video files are certainly horribly complex. I promise I am no expert at
all, but I have been fooling with them for decades, so I suppose I
probably know a lot more than I realize, and more than most people who
haven't been at it that long.

I think the problem is that the files have both a container and a
format. Matroska, if I understand correctly, is a container. It could
hold video, audio, and even subtitles, in any of several formats.

This is unlike a DOCX file, for example, which is always a zip file with
XML (and other) files in expected formats. The closest analogy to the
situation you are seeing would be if MS Office opened an ODT file (from
LibreOffice) and always saved it - without asking - as a DOCX file.

Even more out there would be if LO accepted ODT files that were
tarred and gzipped instead of just plain zipped, and that could also
have HTML, Markdown, or org formats instead of XML inside the tar.gz
file. (That would be an interesting world, I think.)
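To illustrate the DOCX point: a .docx really is just a zip archive with XML members inside, which you can poke at with Python's standard library. (The file built below is a hypothetical minimal stand-in with the right shape, not a valid Word document.)

```python
import zipfile

# Build a minimal zip that mimics the skeleton of a .docx container.
# Real .docx files have more members; this is just the shape.
with zipfile.ZipFile("fake.docx", "w") as z:
    z.writestr("[Content_Types].xml", "<Types/>")
    z.writestr("word/document.xml", "<document>hello</document>")

# Any zip tool can look inside the "document":
with zipfile.ZipFile("fake.docx") as z:
    print(z.namelist())   # ['[Content_Types].xml', 'word/document.xml']

print(zipfile.is_zipfile("fake.docx"))  # True
```

The same trick works on a real .docx: rename it to .zip and any archiver will open it.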

I hadn't realized it until you brought it up, but it is odd that so many
video programs don't have a "save in the same format as the original"
option. I'm sure ffmpeg can probably do it easily, but then we're back
to the original issues with trying to get the cutting lined up neatly.
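For the record, the ffmpeg incantation for a "same format in, same format out" cut is a straight stream copy. A sketch (file names and timestamps are placeholders; this needs ffmpeg and a real input file):

```shell
# Copy all streams from the .ts container into a new .ts container,
# trimming without re-encoding. -c copy bypasses the codecs entirely,
# so it's fast and lossless, but cuts can only land on keyframes -
# which is exactly the "cutting lined up neatly" problem.
ffmpeg -i input.ts -ss 00:01:00 -to 00:45:00 -c copy output.ts
```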

Good luck.

-- 
Chris Spackman (he / him)   ch...@osugisakae.com

ESL Coordinator The Graham Family of Schools
ESL EducatorColumbus State Community College
Japan Exchange and Teaching Program   Wajima, Ishikawa 1995-1998
Linux user since 1998 Linux User #137532




Re: [gentoo-user] Re: Movie editing softeware

2021-12-23 Thread Wols Lists

On 23/12/2021 07:58, Neil Bothwick wrote:
> On Wed, 22 Dec 2021 20:39:59 +, Wols Lists wrote:
>
>> Now emerging! I shall have to play with it, but it looks just what the
>> doctor ordered. I *believe* a ts contains an mpeg2 ... let's hope!
>
> AFAIR a .ts (Transport Stream) file is intended for broadcast and
> so contains more redundant information to allow for unreliable
> transmission - MythTV records .ts files. Converting to MPEG without
> re-encoding gives the same video but in a smaller file.

Quite likely. But if I want to replay it on the same tv (and don't want 
to spend hours recoding), it seems like the best solution - that works - 
is to leave it as it is.


Video is enough of a maze of twisty little passages as it is; I don't 
want to get lost again ... :-)


Cheers,
Wol