Re: tty with black screen, no login.

2020-12-30 Thread Camaleón
On 2020-12-30 at 23:43 -0300, rocapabe wrote:

> I don't know what to put

In the terminal or in this email? >:-)

In the terminal you can log in as administrator (su - ⏎ 
password ⏎) and run «dmesg | tail» to see the last messages 
that have been logged...

And in this email you can simply explain exactly what problem 
you have.

Regards,

-- 
Camaleón 



Re: tty with black screen, no login.

2020-12-30 Thread rocapabe
I don't know what to put. Sent from my Claro Samsung Mobile

Re: mdadm usage

2020-12-30 Thread Andy Smith
Hello,

On Wed, Dec 30, 2020 at 11:14:37PM +0100, deloptes wrote:
> Can someone recommend server or NAS grade (SATA) SSD - a reliable one for
> RAID use?

I suggest checking smartctl values on your existing device to see
how much data you write in a week or so, then convert that to the
matching units (Drive Writes Per Day or TeraBytes Written) shown in
SSD specs so you can work out what you require for however many
years of operation you expect.

Then make sure it has power loss protection.
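As a sketch of that arithmetic (all numbers below are made-up placeholders; check your own smartctl output -- attribute names and units vary by model, though many SATA drives report attribute 241 Total_LBAs_Written in 512-byte units):

```shell
# Two readings of Total_LBAs_Written taken one week apart (placeholders):
lbas_start=3906250000
lbas_end=4101562500
days=7
capacity_tb=1      # drive capacity in TB

# TB written in the interval, assuming 512-byte LBA units
tb_written=$(awk -v a="$lbas_start" -v b="$lbas_end" \
    'BEGIN { printf "%.3f", (b - a) * 512 / 1e12 }')
# Drive Writes Per Day relative to drive capacity
dwpd=$(awk -v t="$tb_written" -v d="$days" -v c="$capacity_tb" \
    'BEGIN { printf "%.3f", t / d / c }')
# Projected TeraBytes Written over 5 years at this rate
tbw_5y=$(awk -v t="$tb_written" -v d="$days" \
    'BEGIN { printf "%.1f", t / d * 365 * 5 }')
echo "wrote ${tb_written} TB in ${days} days; DWPD=${dwpd}; 5-year TBW=${tbw_5y}"
```

Compare the projected TBW against the endurance figure in the SSD's data sheet.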

I would expect any major brand that passes those requirements to be
fine for your use case.

Personally for SATA interface I like Samsung's SM883 or PM883 (3
DWPD vs 1.3 DWPD assuming no over provisioning), but certainly there
are much cheaper options that are still good.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: mdadm usage

2020-12-30 Thread Nicholas Geovanis
> On Wed, Dec 30, 2020, 3:08 PM deloptes  wrote:
>
>> Thomas A. Anderson wrote:
>>
>> > If hardware raid (like if I bought a controller), would it be any
>> > different, if I removed the drives and just put on one another machine
>> > -- would I be able to see the data on it like a normal drive? Or would I
>> > run into the same issue??
>>
>> If you choose hardware RAID over software RAID, the management of the RAID
>> depends on the controller. This means you have to use the management tools
>> provided by the manufacturer, and in order to use the disk on another
>> computer, you have to make sure that the other computer also has the same
>> controller (I mean exactly the same).
>>
>
And I suggest a very first step of making sure that the firmware on the
> RAID card is up to date. I have been in situations with Dell Perc cards
> where the array was unrecoverable after media failure.
>


>> Also, mdadm provides monitoring and email notification, which can also be
>> done with hardware RAID, but that is more exotic or would require
>> something like SNMP.
>> It might be common in some vendor-specific server hardware, but I usually
>> avoid it on no-name PCs.
>>
>
> For servers there will be more powerful logging and monitoring software
> watching, so not really necessary. But you could well do the same for a
> desktop machine.
>
>>


Re: graphical session in LXC automatically started at boot and reachable via VNC/RDP/X2GO

2020-12-30 Thread Linux-Fan

Yvan Masson writes:

[...]

What I did not understand from your answers (sorry maybe I missed something)  
is how to start the graphical session automatically when the container  
starts, so that the software can be started and listening on the network,  
and then later someone can attach to this session with VNC/RDP/X2GO. It  
seems your script Linux-Fan starts a VNC server, but does it start a session  
in it?


I did not mention it explicitly because the VNC server I am using  
(`tightvncserver`) does this automatically.


The most important steps are as follows:

* Prepare a configuration for the session of interest to start, I have these
  lines in my "Dockerfile" (i.e. image preparation stage):

echo \$vncStartup = \"exec /usr/bin/icewm\" > $HOME/.vncrc; \
[...]
printf "%s\n\n%s\n" "#!/bin/sh -e" "/usr/bin/megasync &" \
> /etc/X11/icewm/startup; \

  This way, it configures `icewm` to be the window manager to use and
  uses IceWM's autorun facility to start my program of interest (`megasync`).

* In the container's startup script, run the VNC server.

/usr/bin/vncserver -geometry 1024x768 :0

  This will start `icewm` (as configured in `.vncrc`) which (by configuration
  in `/etc/X11/icewm/startup`) runs `megasync`.

  See also: vncserver(1) --
  https://manpages.debian.org/buster/tightvncserver/tightvncserver.1.en.html

* To automatically start the container upon OS boot, I use Docker's
  `--restart=unless-stopped`. This will be different for LXC of course!

* For the gory details of setting passwords, making the right files
  executable or resuming from "crashed" sessions (i.e. delete temp files)
  check the respective source codes:

  https://github.com/m7a/lo-megasync/blob/master/megasync_ctrl.sh
  https://github.com/m7a/lo-megasync/blob/master/Dockerfile
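For the LXC equivalent of that autostart step, a sketch using LXC's own config keys (container path and delay value are placeholders):

```
# /var/lib/lxc/mycontainer/config
lxc.start.auto = 1      # start the container at boot via lxc-autostart
lxc.start.delay = 5     # optional pause before the next container starts
```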

HTH
Linux-Fan

öö




Re: mdadm usage

2020-12-30 Thread deloptes
Michael Stone wrote:

> The improvement in seek times typically makes for a dramatic improvement
> in usability and user experience, regardless of maximum transfer rate.
> Replacing an HD with an SSD will usually breathe new life into an old
> system; people tend to dramatically underestimate how much I/O wait
> impacts performance. If you've got enough RAM to cache the entire
> working disk then this will be much less noticeable, except on boot, but
> that's usually not the case on desktop systems.

I was actually thinking of replacing the rotating disks yesterday, at least
for the system and build environment, as it has a lot of IO - I guess your
argument counts, so maybe I will reconsider and replace them.

Can someone recommend server or NAS grade (SATA) SSD - a reliable one for
RAID use?



Re: mdadm usage

2020-12-30 Thread Michael Stone

On Wed, Dec 30, 2020 at 09:44:09PM +0100, deloptes wrote:

It depends. For example on the machine at home with LSI adapter that
provides the speed of SATA II I do not see any benefit of using SSD except
power saving


The improvement in seek times typically makes for a dramatic improvement 
in usability and user experience, regardless of maximum transfer rate. 
Replacing an HD with an SSD will usually breathe new life into an old 
system; people tend to dramatically underestimate how much I/O wait 
impacts performance. If you've got enough RAM to cache the entire 
working disk then this will be much less noticeable, except on boot, but 
that's usually not the case on desktop systems.




Re: No GRUB with brand-new GPU

2020-12-30 Thread Michael Stone

On Wed, Dec 30, 2020 at 04:07:01PM -0500, Felix Miata wrote:

If it works, it shouldn't need fixing, or replacing.


And yet, this entire subthread was premised on an upgrade! If you want 
to keep running old hardware then do so. Why on earth would it upset you 
that someone else isn't?



It's very disappointing software has gotten so bloated that NVME and 2^4 genuine
cores are becoming a necessity just to maintain performance.


If it makes you happy to run 10 year old hardware, just keep running 10 
year old software and you'll be all set!




Re: mdadm usage

2020-12-30 Thread deloptes
Thomas A. Anderson wrote:

> If hardware raid (like if I bought a controller), would it be any
> different, if I removed the drives and just put on one another machine
> -- would I be able to see the data on it like a normal drive? Or would I
> run into the same issue??

If you choose hardware RAID over software RAID, the management of the RAID
depends on the controller. This means you have to use the management tools
provided by the manufacturer, and in order to use the disk on another
computer, you have to make sure that the other computer also has the same
controller (I mean exactly the same).
Also, mdadm provides monitoring and email notification, which can also be
done with hardware RAID, but that is more exotic or would require something
like SNMP. It might be common in some vendor-specific server hardware, but I
usually avoid it on no-name PCs. IMO there are more disadvantages than
advantages in using hardware RAID.
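The email notification mentioned above is a one-line setting in mdadm.conf (the address below is a placeholder; on Debian, the mdadm monitor daemon that reads it is normally enabled by default):

```
# /etc/mdadm/mdadm.conf
MAILADDR root@example.org    # where "mdadm --monitor" sends degraded/failure events
```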

Regarding your issue with recovering the RAID, it could be that you have to
put an entry manually into /etc/mdadm/mdadm.conf. It is just a guess, but I
have seen this when an older metadata format (or no metadata, in older RAID
versions) is used. Have a look at "man mdadm.conf".

Partition type 83 means the partition is marked as a Linux partition. Type
fd was used before to let the system know that this is a Linux RAID
partition, so that mdadm can take over; it is still OK to use it today, even
though mdadm has evolved and can handle the partition even if it is not
marked as fd. I would assume an older RAID version was in use when this was
created - there is no metadata, or an old metadata format. This is why I
would try manually writing the mdadm.conf entry for this partition.

You can run blkid on the RAID partition and use the UUID to create the
entry, for example:

blkid /dev/sda1
/dev/sda1: UUID="5c88d19b-4345-d8c6-e214-edc45ed40d02"
TYPE="linux_raid_member" PARTUUID="86e010a2-01"

and the record in mdadm.conf is

ARRAY /dev/md0 metadata=0.90 UUID=5c88d19b:4345d8c6:e214edc4:5ed40d02

Metadata version 0.90 is, AFAIR, the old format (or no metadata at all).
Check the manual.
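As a sketch (run as root, and review the generated lines before keeping them), mdadm can also print those ARRAY lines itself from the on-disk metadata:

```
# mdadm --examine --scan          # prints an ARRAY line per detected array
# mdadm --examine --scan >> /etc/mdadm/mdadm.conf
# update-initramfs -u             # so the initramfs also picks up the change
```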



Re: No GRUB with brand-new GPU

2020-12-30 Thread Felix Miata
Michael Stone composed on 2020-12-30 14:10 (UTC-0500):

> On Wed, Dec 30, 2020 at 01:35:19PM -0500, Felix Miata wrote:

> Wasting power running obsolete equipment is a personal hobby, not an 
> environmental benefit. If you're really concerned about generating new 
> waste, then repurpose someone else's discards. There's nothing obviously 
> virtuous about keeping 10 year old equipment running when 5 year old 
> equipment is heading to the landfill; the net amount of ewaste is the 
> same.

Do you send your car to a recycler every 5 years too? Your house? Does 5 years of
using a little less power equate to power and resources consumed creating new
equipment? If it works, it shouldn't need fixing, or replacing. If I spent what OP
spent to get the fastest processor and triple channel RAM, I would have expected
no less life than he's gotten so far.

It's very disappointing software has gotten so bloated that NVME and 2^4 genuine
cores are becoming a necessity just to maintain performance. e.g.: Firefox
download sizes from Mozilla.org:
1.0  8223869 2004
2.0.0.20 9710348 2008   
3.6.28  10773449 2012   32 bit
4.0.1   15581202 2011   64 bit
10.0.12 18999452 2013   first ESR series
17.0.11 23590858 2013
24.8.1  28372626 2014
31.8.0  38997268 2015
38.8.0  46359449 2016
45.9.0  51807116 2017
52.9.0  57795546 2018
60.9.0  53465985 2019
68.12.0 64977721 2020-08
78.6.0  70888611 2020-12
84.0.1  76247713 2020-12 not ESR
plus RAM requirement:
"512 MB for the 32-bit version and 2 GB for the 64-bit version" for Wintel hdwe.

and kernel:

58886019 May 2017 4.10.13
58571199 Aug 2017 4.11.8
64982767 Aug 2017 4.12.9
61065066 Nov 2017 4.13.12
62547184 Jan 2018 4.14.15
64949060 Apr 2018 4.15.13
66152028 Jun 2018 4.16.12
66781264 Aug 2018 4.17.14
66510600 Oct 2018 4.18.15
67377492 Dec 2018 4.19.12
67350588 Mar 2019 4.20.13
67262744 May 2019 5.0.13
68560788 Jul 2019 5.1.16
70516856 Sep 2019 5.2.14
81933836 Dec 2019 5.3.12
79660308 Feb 2020 5.4.14
80761600 Mar 2020 5.5.13
82639300 Jun 2020 5.6.14
83704540 Aug 7    5.7.12
87032455 Oct 6    5.8.15
86296136 Dec 13   5.9.14
-- 
Evolution as taught in public schools, like religion,
is based on faith, not on science.

 Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

Felix Miata  ***  http://fm.no-ip.com/



Re: mdadm usage

2020-12-30 Thread deloptes
Nicholas Geovanis wrote:

> And also an argument in favor of using only SSD, now that we're past the
> first couple generations of it which were quite unreliable.

It depends. For example on the machine at home with LSI adapter that
provides the speed of SATA II I do not see any benefit of using SSD except
power saving, but when you calculate the power saved it does not cover the
extra cost of the SSD. I use the WD RED 3.5" which is forced into SATA 2.
It is simple mathematics.



Re: graphical session in LXC automatically started at boot and reachable via VNC/RDP/X2GO

2020-12-30 Thread Yvan Masson

Le 30/12/2020 à 00:11, Linux-Fan a écrit :

Yvan Masson writes:


Hi list,

I need to run a graphical software called Noethys that also listens on 
some TCP port. It:

1. needs to be reachable from the network during work hours
2. needs to be accessible remotely a few times per day, mainly by a 
Windows workstation on the LAN


[...]


I am facing the following difficulties/questions:
- running a normal X11 server in a container does not work because it 
would need to access some special files in /dev/, so it needs extra 
setup in the container and this scares me a bit


I tried that unsuccessfully a few times, too. My take: Containers are 
not for virtualizing graphics. I use VMs or even more lightweight things 
like `firejail` or `chroot` for "normal X11 server" purposes.


- however, after installing xrdp and x2go servers in the container, I 
can successfully connect remotely with these respective protocols 
without any particular setup. I would really like to find a way to 
automatically start a X11 session at boot in same way xrdp or x2go do 
it (I would then stick with this protocol)


VNC in containers works for me :) It does pretty much work as you 
described, i.e. starting the things automatically. I currently use a 
script [1], but had I known before that systemd supports containers, I 
would have possibly chosen to run it inside the container for service 
management (avoids writing one's own logic to detect stopped services 
etc.).



Has someone already done something similar? What would be your advice?


Yes, see [1]. I did it in Docker (i.e. not LXC) and it seems to work 
just fine. Some ideas:


* Make sure to consider software upgrades for the containers. I do some sort
   of periodic unattended-upgrades _inside_ the container [2], but "best practice"
   would suggest to re-create the containers all of the time (to have them
   mostly stateless, that is).

* Consider encrypting your VNC/X11 traffic. SSH was already suggested in the
   thread and is now officially available for Windows clients, too!

[1] https://github.com/m7a/lo-megasync/blob/master/megasync_ctrl.sh

[2] https://masysma.lima-city.de/32/trivial_automatic_update.xhtml

[...]

HTH
Linux-Fan

öö


Thanks for your answers!

I did a few tests, and it is indeed not straightforward to run X11 in an LXC 
container… And yes, applying security updates in the container is mandatory.


What I did not understand from your answers (sorry maybe I missed 
something) is how to start the graphical session automatically when the 
container starts, so that the software can be started and listening on 
the network, and then later someone can attach to this session with 
VNC/RDP/X2GO. It seems your script Linux-Fan starts a VNC server, but 
does it start a session in it?




Re: mdadm usage

2020-12-30 Thread Nicholas Geovanis
On Wed, Dec 30, 2020, 1:12 PM Andrei POPESCU 
wrote:

> On Mi, 30 dec 20, 13:29:05, Marc Auslander wrote:
> >
> > IMHO, there are two levels of backup.  The more common use is to undo
> > user error - deleting the wrong thing or changing something and wanting
> > to back out.  For that, backups on the same system are the most
> > convenient.  And if it's on the same system, and you have raid1, you
> > don't need a separate physical drive.
> >
> > The second is of course disaster recovery, a very low probability
> > event - and I backup in the cloud and occasionally on removable media
> > for that.
>
> What's the benefit of the "first" level if you also have the "second"
> and is it worth the extra complexity?
>

The answer is yes, if my management deems the quicker turn-around on case 1
restores to be beneficial. Or beneficial for that particular server. We may
be speaking of 2 different use cases: multi-tenant server versus dedicated
backend server.

In my opinion a good backup system should be able to easily deal with
> both use cases.
>

Honestly the only place I've found that to really truly work in practice at
large scale is IBM's mainframe hierarchical storage systems. Note I said
"at scale".

Kind regards,
> Andrei
> --
> http://wiki.debian.org/FAQsFromDebianUser
>


Re: Webcam resolution in VLC (and elsewhere)

2020-12-30 Thread didier gaumet
Le mercredi 30 décembre 2020 à 16:40:06 UTC+1, Celejar a écrit :
[...]
> 1) Why does VLC default to the lower resolution? Incidentally, I see 
> that Cheese also opens the camera by default at a lower resolution - 
> but a different one: 960x540. Why? Is this to save space when recording? 

To set up the default video resolution of your webcam in Cheese, install 
dconf-editor, launch it, and set the video-x-resolution and 
video-y-resolution keys under /org/gnome/cheese/
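For reference, these are the dconf keys in question (values are examples; the same should be settable from a terminal with `gsettings set org.gnome.Cheese video-x-resolution 1280`, assuming that schema name):

```
/org/gnome/cheese/video-x-resolution  1280
/org/gnome/cheese/video-y-resolution  720
```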



Re: mdadm usage

2020-12-30 Thread Michael Stone

On Wed, Dec 30, 2020 at 09:12:08PM +0200, Andrei POPESCU wrote:

On Mi, 30 dec 20, 13:29:05, Marc Auslander wrote:


IMHO, there are two levels of backup.  The more common use is to undo
user error - deleting the wrong thing or changing something and wanting
to back out.  For that, backups on the same system are the most
convenient.  And if it's on the same system, and you have raid1, you
don't need a separate physical drive.

The second is of course disaster recovery, a very low probability
event - and I backup in the cloud and occasionally on removable media
for that.


What's the benefit of the "first" level if you also have the "second"


time to restore



Re: Webcam resolution in VLC (and elsewhere)

2020-12-30 Thread didier gaumet
Le mercredi 30 décembre 2020 à 16:40:06 UTC+1, Celejar a écrit :
[...]
> 1) Why does VLC default to the lower resolution? Incidentally, I see 
> that Cheese also opens the camera by default at a lower resolution - 
> but a different one: 960x540. Why? Is this to save space when recording? 

I don't know: one possibility is that either Linux or VLC/Cheese incorrectly 
detects the webcam resolution. Another possibility could be a marketing 
trick: a higher resolution is claimed through interpolation while the actual 
optical resolution is lower.

> 2) VLC's GUI doesn't seem to offer any way to set the resolution. 
> I see all kind of references on the internet to knobs to turn to set 
> resolution, but none of them seem available in my VLC when using a v4l2 
> camera. Why? 

I use VLC from Buster. The following is a rough English translation of my 
VLC, which is in French.

- To set the resolution of the webcam for a session and start the session: 
Media > Open a capture device, then set the capture mode to video camera, 
click the advanced options button and fill in the width and height fields.
- To set the default resolution of the webcam: Tools > Preferences, then 
click display parameters: all, then Input/Codecs > Access modules > v4l, 
and fill in the width and height fields.



Re: mdadm usage

2020-12-30 Thread Andrei POPESCU
On Mi, 30 dec 20, 13:29:05, Marc Auslander wrote:
>
> IMHO, there are two levels of backup.  The more common use is to undo
> user error - deleting the wrong thing or changing something and wanting
> to back out.  For that, backups on the same system are the most
> convenient.  And if it's on the same system, and you have raid1, you
> don't need a separate physical drive.
> 
> The second is of course disaster recovery, a very low probability
> event - and I backup in the cloud and occasionally on removable media
> for that.
 
What's the benefit of the "first" level if you also have the "second" 
and is it worth the extra complexity?

In my opinion a good backup system should be able to easily deal with 
both use cases.

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




Re: No GRUB with brand-new GPU

2020-12-30 Thread Michael Stone

On Wed, Dec 30, 2020 at 01:35:19PM -0500, Felix Miata wrote:

Michael Stone composed on 2020-12-30 08:56 (UTC-0500):

On Tue, Dec 29, 2020 at 10:58:38PM -0500, Felix Miata wrote:

So people are supposed to discard or replace their older external devices just
because something else came along that may or may not actually be as well suited
to task?



Basically, yes.


Must be nice to have an unlimited money supply, and be a happy conforming member
of throwaway society. If it was good enough before, it should be good enough
until it breaks. Waste not, want not.


If those are concerns, don't upgrade at all...we're talking about the 
best way to upgrade a system, and I'll continue to maintain that it does 
not make sense to seek out a new motherboard with a plethora of obsolete 
interfaces in order to build a new system around ancient parts.



usable stuff, preserving the environment as best they can, seemingly a dying 
breed.


Wasting power running obsolete equipment is a personal hobby, not an 
environmental benefit. If you're really concerned about generating new 
waste, then repurpose someone else's discards. There's nothing obviously 
virtuous about keeping 10 year old equipment running when 5 year old 
equipment is heading to the landfill; the net amount of ewaste is the 
same.




Re: No GRUB with brand-new GPU

2020-12-30 Thread Andrei POPESCU
On Mi, 30 dec 20, 13:35:19, Felix Miata wrote:
> 
> Again you stripped context. Managing media is a component of most backup
> systems. Labeling is an integral component of such management, as is
> storage of the media. I've yet to see a USB stick storage system that
> accommodates either labeling or physical storage of the wide variety of
> stick shapes and sizes, quite unlike floppies and OM. Shrinking media has
> downsides.

USB sticks come in various shapes and sizes though, and most of them 
have some sort of keychain loop that can be used to attach tag(s) as big 
as needed.
 
> Some people don't like fixing what ain't broke, keeping out of landfills
> perfectly usable stuff, preserving the environment as best they can,
> seemingly a dying breed.

Hardware can be recycled and new stuff is often using *much* less power 
(and space, often also completely silent because passively cooled) while 
providing better performance.

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




Re: mdadm usage

2020-12-30 Thread Marc Auslander
Reco  writes:
>
>And what purpose would it serve? IMO it's not a backup unless it's
>stored in a way that's inaccessible to the system it's taken from (until
>it's actually needed of course).
>
>Reco
IMHO, there are two levels of backup.  The more common use is to undo
user error - deleting the wrong thing or changing something and wanting
to back out.  For that, backups on the same system are the most
convenient.  And if it's on the same system, and you have raid1, you
don't need a separate physical drive.

The second is of course disaster recovery, a very low probability
event - and I backup in the cloud and occasionally on removable media
for that.



Re: No GRUB with brand-new GPU

2020-12-30 Thread Felix Miata
Michael Stone composed on 2020-12-30 08:56 (UTC-0500):

> On Tue, Dec 29, 2020 at 10:58:38PM -0500, Felix Miata wrote:

>>So people are supposed to discard or replace their older external devices just
>>because something else came along that may or may not actually be as well 
>>suited
>>to task?

> Basically, yes.

Must be nice to have an unlimited money supply, and be a happy conforming member
of throwaway society. If it was good enough before, it should be good enough
until it breaks. Waste not, want not.

> Who cares what the device names are? If you're hard coding device names 
> instead of something intrinsic to the media it's a self-inflicted wound.

Where does this presumption of hard coding come from? I like short names, eth0
rather than enp5s0 in an environment with only one or two such devices, sda
rather than nvme0n1 when running hdparm -t or trying to explain device
enumeration to a n00b. Likewise I hate change for the sake of change, and name
changes, e.g. Goldstar changing to LG, KSnapshot changing to Spectacle, RCA
disappearing into GE, GTE and Bell Atlantic disappearing into Verizon, AT&T
Broadband changing to Comcast, Datsun -> Nissan, SCSI applied to *ATA devices.

>> I still prefer floppies

> That's frankly nuts--floppies were slow and unreliable 25 years ago and 
> haven't improved with age. If you value nostalgia over any rational 
> criteria that's fine for you, but they aren't relevant to most of 
> humanity.

Again you stripped context. Managing media is a component of most backup
systems. Labeling is an integral component of such management, as is storage
of the media. I've yet to see a USB stick storage system that accommodates
either labeling or physical storage of the wide variety of stick shapes and
sizes, quite unlike floppies and OM. Shrinking media has downsides. Junk
accumulates according to the amount of space available for it to fill. This
applies to videos, mp3s and other data. Not everything new constitutes only
improvement.

Some people don't like fixing what ain't broke, keeping out of landfills
perfectly usable stuff, preserving the environment as best they can,
seemingly a dying breed.
-- 
Evolution as taught in public schools, like religion,
is based on faith, not on science.

 Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

Felix Miata  ***  http://fm.no-ip.com/



Re: mdadm usage

2020-12-30 Thread Andrei POPESCU
On Mi, 30 dec 20, 16:56:25, mick crane wrote:
> 
> I just confused myself. Initially I read somewhere that to make the raid
> first copy the OS from one disk to another.

Do you mean copy as in 'cp' or 'rsync' or similar? RAID operates at a 
lower level, typically below the filesystem (except for ZFS and btrfs, 
which integrate both).

> Just struggling to get a picture of what happens, if what's on one disk gets
> copied to the other or if they are written to simultaneously.

It should happen (more or less) simultaneously. The data is still in RAM 
so it's much faster to just write it to the other drive as well, instead 
of reading it from the first drive.

> I always understood that you can't duplicate one disk to another with itself
> being the OS.

Depends on what you understand by "duplicate".

If you mean a file-level copy, cp and/or rsync don't care whether the 
system is running or not. The problem is that files may change before 
the copy operation is completed and the resulting system is 
inconsistent[1].

As far as I know it is possible to take a "snapshot" of a running system 
with LVM, ZFS and btrfs.

> It'll be the way the software raid does it. I should probably read more.

Snapshots and RAID mirrors are different things.

[1] though this can be used to minimize downtime, e.g. by doing one 
cp/rsync pass while the system is running and a second *rsync* pass with 
the system inactive, e.g. while booted from a live system or another 
installation.
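That two-pass approach can be sketched like this (throwaway directories stand in for the real source and destination disks; in the real case, the second pass would run while the system is inactive):

```shell
src=$(mktemp -d); dst=$(mktemp -d)           # stand-ins for the real mount points
echo v1 > "$src/a"; echo junk > "$src/old"   # pretend this is the live system

rsync -a "$src/" "$dst/"                     # pass 1: system still running

echo v2 > "$src/a"; rm "$src/old"            # files change during/after pass 1

rsync -a --delete "$src/" "$dst/"            # pass 2: short, syncs deltas and deletions
```

The second pass only transfers what changed, so the downtime window stays small.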

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




Re: mdadm usage

2020-12-30 Thread Reco
Hi.

On Wed, Dec 30, 2020 at 09:55:06AM -0500, Marc Auslander wrote:
> Andrei POPESCU  writes:
> >
> >Automatic mirroring / synchronizing is unsuitable for backups, because 
> >it will also sync accidental changes to files (including deletions) or 
> >filesystem corruptions in case of power outage or system crash (that may 
> >lead to corrupted files or entire directories "disappearing").
> 
> BUT - once you have hardware reliable storage (raid1) you can do backups
> into the same disks you are backing up!

And what purpose would it serve? IMO it's not a backup unless it's
stored in a way that's inaccessible to the system it's taken from (until
it's actually needed of course).

Reco



Re: mdadm usage

2020-12-30 Thread Dan Ritter
Marc Auslander wrote: 
> Andrei POPESCU  writes:
> >
> >Automatic mirroring / synchronizing is unsuitable for backups, because 
> >it will also sync accidental changes to files (including deletions) or 
> >filesystem corruptions in case of power outage or system crash (that may 
> >lead to corrupted files or entire directories "disappearing").
> >
> ...
> 
> BUT - once you have hardware reliable storage (raid1) you can do backups
> into the same disks you are backing up!

No. Not even in jest.

-dsr-



Re: mdadm usage

2020-12-30 Thread mick crane

On 2020-12-30 13:38, Andy Smith wrote:

Hi Mick,

On Tue, Dec 29, 2020 at 03:32:07PM +, mick crane wrote:

On 2020-12-29 13:10, Andy Smith wrote:
>The default metadata format (v1.2) for mdadm is at the beginning of
>the device. If you've put a filesystem directly on the md device
>then the presence of the metadata will prevent it being recognised
>as a simple filesystem. What you can do is force mdadm to import it
>as a degraded RAID-1.
<..>
I've puzzled about this. Are you supposed to have 3 disks?
One for the OS and the other 2 for the raid1?


It's unclear to me how what you have quoted relates to your
subsequent question. Can you elaborate on what the location of mdadm
metadata has to do with whether one puts the operating system on a
RAID-1 device or not?

I expect that most people using RAID put their OS into the RAID as
well. I certainly do. I don't understand your mental separation of
"OS" and "RAID-1" or why it might mandate 3 devices. It is perfectly
straightforward to put a single partition¹ on each of two devices,
RAID-1 them together and use it as root filesystem and that's it.

Probably we are just misunderstanding each other and there is a
question here that I haven't understood.


I just confused myself. Initially I read somewhere that to make the RAID you 
first copy the OS from one disk to another.
Just struggling to get a picture of what happens: whether what's on one disk 
gets copied to the other, or whether they are written to simultaneously.
I always understood that you can't duplicate one disk to another while that 
disk holds the running OS.

It'll be the way the software raid does it. I should probably read more.

mick

--
Key ID 4BFEBB31



Webcam resolution in VLC (and elsewhere)

2020-12-30 Thread Celejar
Hi,

I'm puzzled by VLC's handling of my webcam. When I open it using
default settings, either via the GUI or CLI ('vlc v4l2://'), it opens
(according to 'Tools / Media Information / Codec') in 848x480. The
camera supports HD, however, and I can get that by 'vlc
v4l2://:width=1280:height=720' (or by making up random higher numbers
for width and height).

1) Why does VLC default to the lower resolution? Incidentally, I see
that Cheese also opens the camera by default at a lower resolution -
but a different one: 960x540. Why? Is this to save space when recording?

2) VLC's GUI doesn't seem to offer any way to set the resolution.
I see all kind of references on the internet to knobs to turn to set
resolution, but none of them seem available in my VLC when using a v4l2
camera. Why?

VLC's documentation is often terrible, and in this case, it says:

"For Video4Linux devices, you can set the name of the video and audio
devices using the "Video device name" and "Audio device name" text
inputs. The "Advanced options..." button allows you to select some
further settings useful in some rare cases, such as the chroma of the
input (the way colors are encoded) and the size of the input buffer."

https://wiki.videolan.org/Documentation:Open_Media/#Play_from_an_acquisition_card

So does this mean what I think it means? The standard settings don't
have a resolution knob, and neither do the advanced ones (which should
only be necessary "in some rare cases" anyway)?

Here's VLC's v4l2 module's (CLI) documentation:

https://wiki.videolan.org/Documentation:Modules/v4l2/

So there's no way to just tell VLC to use the highest resolution, and
the user has to somehow figure out that he has to first discover the
offered resolutions via something like lsusb or Cheese [0], and then enter
them manually via the 'width' and 'height' parameters?!

Who designed this thing's UIs? At least Cheese offers a relatively
straightforward way of setting the resolution.

[0] 
https://askubuntu.com/questions/214977/how-can-i-find-out-the-supported-webcam-resolutions

Celejar



Re: mdadm usage

2020-12-30 Thread Marc Auslander
Andrei POPESCU  writes:
>
>Automatic mirroring / synchronizing is unsuitable for backups, because 
>it will also sync accidental changes to files (including deletions) or 
>filesystem corruptions in case of power outage or system crash (that may 
>lead to corrupted files or entire directories "disappearing").
>
...

BUT - once you have hardware-reliable storage (RAID 1) you can do backups
onto the same disks you are backing up!



Re: Fichero de configuración de red

2020-12-30 Thread Camaleón
On 2020-12-30 at 13:48 +, Oliver wrote:

> I'm getting acquainted with the new way of configuring the network
> through systemd. I created a lan0.network file in /etc/systemd/network/.
> It picked up the IP fine, but it is not picking up the values of the
> Domains variable. I uninstalled the network-manager package to avoid
> conflicts and renamed the interfaces file from the old method. What does
> exist is resolv.conf, but it is unconfigured.

Remember to bring up all the systemd services that deal with the 
network, specifically «systemd-resolved», which is the one that should 
read those variables you defined in the configuration file. And to 
restart all the network services whenever you change the text files, so 
that they pick up the new values.
 
> Do you have an example of a .network file that includes the data I am
> asking about? When I ping a host name it does not resolve, because the
> DNS suffix I put in Domains is not appended automatically.

Hmm... according to the manual it should be something like this:

[Network]
Domains=domain.local domain.site domain.mydomain

The format and usage are similar to the «search» field of the 
«/etc/resolv.conf» file.
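For reference, a fuller sketch of such a unit (interface name, addresses
and domains here are made-up assumptions, not taken from Oliver's setup):

```ini
# /etc/systemd/network/lan0.network -- hypothetical example
[Match]
Name=lan0

[Network]
Address=192.168.1.10/24
Gateway=192.168.1.1
DNS=192.168.1.1
# Search suffixes appended to single-label names, like the
# "search" field in /etc/resolv.conf
Domains=domain.local
```

After editing, restart systemd-networkd and systemd-resolved and check
the result with «resolvectl domain».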

To debug the DNS servers, «dig» or «host» work better. And to see what
systemd is doing:

How to troubleshoot DNS with systemd-resolved?
https://unix.stackexchange.com/questions/328131/how-to-troubleshoot-dns-with-systemd-resolved

Regards,

-- 
Camaleón 



Re: mdadm usage

2020-12-30 Thread Nicholas Geovanis
On Wed, Dec 30, 2020, 8:03 AM Nicholas Geovanis 
wrote:

>
>
> On Wed, Dec 30, 2020, 7:39 AM Jesper Dybdal 
> wrote:
>
>> I would hope that the most recently modified half of the array would be
>> the one to overwrite the least recently modified one, so that a
>> temporary absence of one disk which later comes back unmodified, will
>> not destroy data.
>>
>> Is that how it works?
>>
>
> .
> the failed drive is simply removed and destroyed, then replaced with a
> brand new drive and re-mirrored. If you have hundreds or thousands of
> servers, weelll. there's another good argument for virtualisation. Said
> your boss :-)
>

And also an argument in favor of using only SSD, now that we're past the
first couple generations of it which were quite unreliable.

-- 
>> Jesper Dybdal
>> https://www.dybdal.dk
>>
>>


mdadm usage

2020-12-30 Thread Nicholas Geovanis
On Wed, Dec 30, 2020, 7:39 AM Jesper Dybdal 
wrote:

> I would hope that the most recently modified half of the array would be
> the one to overwrite the least recently modified one, so that a
> temporary absence of one disk which later comes back unmodified, will
> not destroy data.
>
> Is that how it works?
>

You would hope that it works that way. Trouble is, you can't necessarily
tell how far the failed drive got in its last write. And because the
failure occurred, mdadm can't always tell either. In addition, unfixable
corruption may have taken place in the filesystem above that, such that
even a successful mdadm recovery still doesn't get your data back. So the
backup on separate media is crucial.

Some places I've worked use mdadm only for RAID performance reasons. But
any failure recovery is a restore from off site backup. They don't even try
RAID recovery. Or as I have done in the past, the other half of the RAID 1
is put directly into use, the failed drive is simply removed and destroyed,
then replaced with a brand new drive and re-mirrored. If you have hundreds
or thousands of servers, weelll. there's another good argument for
virtualisation. Said your boss :-)

-- 
> Jesper Dybdal
> https://www.dybdal.dk
>
>


Re: mdadm usage

2020-12-30 Thread Andy Smith
Hello,

On Wed, Dec 30, 2020 at 02:38:46PM +0100, Jesper Dybdal wrote:
> I would hope that the most recently modified half of the array would be the
> one to overwrite the least recently modified one, so that a temporary
> absence of one disk which later comes back unmodified, will not destroy
> data.

Consider what happens if you take an mdadm RAID-1 member and put it
in another machine, mount it and then start writing to it. The
events count of the device will increase with each write.

If you then take that device and put it back in the original
machine, and it has a higher event count than the device in the
machine already, on next assemble mdadm will overwrite the device
that has the lower event count.

You can see the event count with:

# mdadm --examine /dev/sda1 # or whatever the member device is

So yes in one way your idea that the most recently modified half is
the one chosen could be said to be correct, if by "most recently
modified" you actually mean "most number of events".

As you were thinking, it is pretty safe to do if you never write to
the device you take out.
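The comparison can also be scripted. A minimal sketch that pulls the
Events counter out of `--examine` output; the two sample strings below
are made up for illustration, on a live system you would pipe
`mdadm --examine /dev/sdb1` (or whichever member) instead:

```shell
# Parse the "Events" line from mdadm --examine output for two members;
# the member with the higher count wins on the next assemble.
# Sample outputs are illustrative, not from real devices.
examine_a='         Events : 4242'
examine_b='         Events : 4239'

ev_a=$(printf '%s\n' "$examine_a" | awk '/Events/ {print $NF}')
ev_b=$(printf '%s\n' "$examine_b" | awk '/Events/ {print $NF}')

if [ "$ev_a" -gt "$ev_b" ]; then
    echo "member A is newer (events $ev_a > $ev_b)"
else
    echo "member B is newer (events $ev_b >= $ev_a)"
fi
```

With the sample values above this prints "member A is newer (events 4242 > 4239)".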

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: No GRUB with brand-new GPU

2020-12-30 Thread Michael Stone

On Tue, Dec 29, 2020 at 10:58:38PM -0500, Felix Miata wrote:

So people are supposed to discard or replace their older external devices just
because something else came along that may or may not actually be as well suited
to task?


Basically, yes. If I'm provisioning a new system, it seems stupid to 
build it around 10 year old storage media which is much closer to the 
end of its operational life than the beginning. It's just not rational 
to choose an option which has a higher failure rate, lower speed, higher 
cost per megabyte, etc., outside of extremely specialized requirements.
You might not have noticed, but SATA basically stopped evolving a decade 
ago. The last attempt to really move SATA forward was SATA express, 
which was DOA and (AFAIK) never implemented on the drive side. There 
have been a few tweaks to the SATA standard since then, but it's 
effectively in legacy mode. Why? Because NVMe makes much more sense for 
modern storage media, and SATA's attempt to play in that space (SATA 
express) was already too slow and too late a decade ago. So if you buy 
something today it's going to be either slow/archival/cheap SATA or 
M.2/U.2 NVMe. When SATA stopped evolving, it made no more sense to keep 
eSATA going and now USB/thunderbolt are much faster as well as being 
more universal and having more functionality. The reason you don't see 
motherboards with 10 different kinds of ports on the back is that the 
couple of ports that remain do everything the old ones did, but faster 
and cheaper (by reducing BOM).


Anyway, if you've got some sort of niche requirement like eSATA or 
firewire, get an adapter. An internal/external sata converter is 
literally a couple of bucks. In the context of the thread, there's still 
an existing system that's working fine, so keeping legacy devices on 
there is also an option. If you're determined that you need every 
obsolete interface (plus, presumably, new interfaces) built in to a new
motherboard that's certainly your right, but you're really limiting your 
options and honestly showing signs of being an IT hoarder. 


Obsolete, like beauty, is in the eye of the beholder/user. I want my external
disks on the same bus as my internal disks, not the USB bus, which when used for
disks too commonly results in musical device names, and reboots that need to be
redone because of a USB stick that failed to get removed first.


Who cares what the device names are? If you're hard coding device names 
instead of something intrinsic to the media it's a self-inflicted wound.



I still prefer floppies


That's frankly nuts--floppies were slow and unreliable 25 years ago and 
haven't improved with age. If you value nostalgia over any rational 
criteria that's fine for you, but they aren't relevant to most of 
humanity.




Re: mdadm usage

2020-12-30 Thread Andy Smith
On Wed, Dec 30, 2020 at 11:42:37AM +0100, Thomas A. Anderson wrote:
> When i enter mdadm --examine /dev/sdb
> 
> I get:
> 
> /dev/sdb:
> 
>     MBR Magic: aa55
> 
> Partition[0] : 3907026944 sectors at         2048 (type 83)

It would say more than that if sdb had ever been an md RAID member.

Are you sure it was sdb? Could it have been a partition on sdb?
"fdisk -l /dev/sdb" to list partitions. Also be really careful that
sdb really is the device you think it is!

> If hardware raid (like if I bought a controller), would it be any
> different, if I removed the drives and just put on one another machine
> -- would I be able to see the data on it like a normal drive? Or would I
> run into the same issue??

You would run into the same issue but it would be worse because the
other computer would have to have the same brand (possibly even the
exact same model) of hardware RAID. Every RAID system has to put
metadata onto the devices.

With mdadm, the structure on disk is public information. If you run
into difficulty you can get help from a wide pool of people. I have
seen data brought back from some truly disastrous situations in
threads on the linux-raid mailing list (where mostly md-related
things are discussed).

Try the same thing with hardware RAID and your only port of call is
the manufacturer's support desk because the layout of your data is
now proprietary information. For most of us the support desks of
such vendors don't work out well.

In many cases hardware RAID performs better, especially if you get
one with a supercap-backed write cache, but the trend these days is
to do Just a Bunch of Disks (JBOD) with software RAID, btrfs or
zfs.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Fichero de configuración de red

2020-12-30 Thread Oliver
Hello,

I'm getting acquainted with the new way of configuring the network
through systemd. I created a lan0.network file in /etc/systemd/network/.
It picked up the IP fine, but it is not picking up the values of the
Domains variable. I uninstalled the network-manager package to avoid
conflicts and renamed the interfaces file from the old method. What does
exist is resolv.conf, but it is unconfigured.

Do you have an example of a .network file that includes the data I am
asking about? When I ping a host name it does not resolve, because the
DNS suffix I put in Domains is not appended automatically.

Thanks in advance.


Re: mdadm usage

2020-12-30 Thread The Wanderer
On 2020-12-30 at 08:38, Andy Smith wrote:

> Hi Mick,
> 
> On Tue, Dec 29, 2020 at 03:32:07PM +, mick crane wrote:
> 
>> On 2020-12-29 13:10, Andy Smith wrote:
>> 
>>> The default metadata format (v1.2) for mdadm is at the beginning
>>> of the device. If you've put a filesystem directly on the md
>>> device then the presence of the metadata will prevent it being
>>> recognised as a simple filesystem. What you can do is force mdadm
>>> to import it as a degraded RAID-1.

>> I've puzzled about this. Are you supposed to have 3 disks ? One for
>> the OS and the other 2 for the raid1 ?
> 
> It's unclear to me how what you have quoted relates to your 
> subsequent question. Can you elaborate on what the location of mdadm 
> metadata has to do with whether one puts the operating system on a 
> RAID-1 device or not?

I initially parsed this question as asking whether the metadata is
supposed to be stored on something other than the disk(s) involved in
the RAID-1.

Reading it again, it might be a reaction to the final sentence about
"force mdadm to import it"; in order to be able to do that, you need to
have the system booted to a running OS, and in order for that to happen
the OS can't be on the RAID that you're trying to import. Which would
then require at least three disks, two for the RAID-1 (of which you're
only using one, when importing) and one for the OS of the system you're
running when performing the import.

-- 
   The Wanderer

The reasonable man adapts himself to the world; the unreasonable one
persists in trying to adapt the world to himself. Therefore all
progress depends on the unreasonable man. -- George Bernard Shaw



signature.asc
Description: OpenPGP digital signature


Re: mdadm usage

2020-12-30 Thread Jesper Dybdal



On 2020-12-30 02:21, Michael Stone wrote:

On Wed, Dec 30, 2020 at 01:53:59AM +0100, deloptes wrote:

you can start one of the drives that was member of raid1 array on any
computer, just as Reco said by assembling the mdraid on that computer.


Just be very careful about ever putting both halves of the array on 
the same computer again, as one will overwrite the other, potentially 
automatically, and not necessarily in a direction you like.




I would hope that the most recently modified half of the array would be 
the one to overwrite the least recently modified one, so that a 
temporary absence of one disk which later comes back unmodified, will 
not destroy data.


Is that how it works?

--
Jesper Dybdal
https://www.dybdal.dk



Re: mdadm usage

2020-12-30 Thread Andy Smith
Hi Mick,

On Tue, Dec 29, 2020 at 03:32:07PM +, mick crane wrote:
> On 2020-12-29 13:10, Andy Smith wrote:
> >The default metadata format (v1.2) for mdadm is at the beginning of
> >the device. If you've put a filesystem directly on the md device
> >then the presence of the metadata will prevent it being recognised
> >as a simple filesystem. What you can do is force mdadm to import it
> >as a degraded RAID-1.
> <..>
> I've puzzled about this. Are you supposed to have 3 disks ?
> One for the OS and the other 2 for the raid1 ?

It's unclear to me how what you have quoted relates to your
subsequent question. Can you elaborate on what the location of mdadm
metadata has to do with whether one puts the operating system on a
RAID-1 device or not?

I expect that most people using RAID put their OS into the RAID as
well. I certainly do. I don't understand your mental separation of
"OS" and "RAID-1" or why it might mandate 3 devices. It is perfectly
straightforward to put a single partition¹ on each of two devices,
RAID-1 them together and use it as root filesystem and that's it.

Probably we are just misunderstanding each other and there is a
question here that I haven't understood.

Cheers,
Andy

¹ You can just use the devices as well, no partitions, but sometimes
  weird things can happen with BIOSes that think this is an invalid
  configuration, so I wouldn't recommend it for devices you will
  boot from.



Re: APT não conecta no servidor do repositório

2020-12-30 Thread Helio Loureiro
What is the output of this command:

curl  http://deb.debian.org/debian/

Regards,
Helio Loureiro
http://helio.loureiro.eng.br
http://br.linkedin.com/in/helioloureiro
http://twitter.com/helioloureiro

On Fri, 25 Dec 2020 at 20:19, Tom Mostard  wrote:

>
> I have had this same problem before and solved it by testing the DNS.
>
>
> On Tue, 22 Dec 2020 at 17:41, Moksha Tux 
> wrote:
>
>> The curious thing, folks, is that another server on the same network
>> with the same Debian version does not have this problem. Thanks a lot
>> Paulo for the tip, but nothing changed
>>
>> On Tue, 22 Dec 2020 at 16:53, Paulo 
>> wrote:
>>
>>> Friend,
>>>
>>> Could it be a routing issue?
>>>
>>> Which DNS are you using? Google (8.8.8.8 and 8.8.4.4)?
>>>
>>> Is there anything in your /etc/hosts that overrides it?
>>> traceroute http://deb.debian.org/debian
>>>
>>> traceroute http://ftp.br.debian.org/debian
>>> Regards,
>>>
>>> Paulo Correia
>>>
>>>
>>> On 22/12/2020 16:26, Moksha Tux wrote:
>>>
>>> Thanks a lot Lucas for the help, but I already made that change and nothing changes
>>>
>>>
>>> On Tue, 22 Dec 2020 at 16:24, Lucas Castro <
>>> lu...@gnuabordo.com.br> wrote:
>>>
 Try changing the repository

 from http://deb.debian.org/debian buster to some other one,

 such as http://ftp.br.debian.org/debian buster
 On 12/22/20 1:53 PM, Moksha Tux wrote:

 Good afternoon, folks!

 I have a problem that I could not find a solution for on Google, and it
 has been a while now. It's like this: every time I try to use APT on my
 Debian 10 server, an error shows up that I believe is a connection error;
 it hangs for a long time and then returns an error saying it could not
 connect to the mirror server, see below:

 it stays like this for a long time:
 Obter:1 http://security.debian.org/debian-security buster/updates
 InRelease [65,4 kB]
 Obter:2 http://deb.debian.org/debian buster InRelease [121 kB]
 Obter:1 http://security.debian.org/debian-security buster/updates
 InRelease [65,4 kB]

 Then it returns this:
 Err:2 http://deb.debian.org/debian buster InRelease
  Conexão falhou [IP: 151.101.178.132 80]

 And it drops back to the prompt. Could someone here save me, please?

 Thanks

 --
 Lucas Castro




Re: mdadm usage

2020-12-30 Thread Alexander V. Makartsev

On 30.12.2020 15:42, Thomas A. Anderson wrote:

it could all very well be, that I have borked these two drives. It's not
the end of the world, there was no data loss, and the data was already
transferred off of them. And, my lesson has been learned. RAID sounds
all good and dandy, and does provide some protection against data loss
caused by hardware failure.
It appears to be misconfigured, because at the very least the partition 
type should be "fd" (Linux RAID autodetect), not "83" as in your case, 
which is the plain Linux partition type.

Now it looks like you're dealing with a less straightforward setup than 
a simple one with plain hardware and 2 drives.



Which brings me to my final question, just for closure.

If hardware raid (like if I bought a controller), would it be any
different, if I removed the drives and just put on one another machine
-- would I be able to see the data on it like a normal drive? Or would I
run into the same issue??
The only advantage a hardware RAID controller (not a simple HBA) could 
offer is performance. But it also adds an additional point of failure 
- the RAID controller itself.
When a hardware RAID controller fails, in most cases you have to replace 
it with the same model or, if the same model is no longer available due 
to end-of-life or discontinuation,
with a compatible one from the same make/brand, to reassemble the RAID 
array on the drives and access the data.
The industry trend seems to be moving away from hardware RAID 
solutions, replacing them with software-defined ones.


In the case of software-defined RAID, like "mdraid" or "Microsoft Storage 
Spaces", you can reassemble the array and access the data on another machine.
Speaking of "mdraid", with a properly configured RAID1 you can even use a 
single drive from the array to access the data on another machine.
If it is a drive with a working OS, it will boot up normally and 
report the degraded state of the RAID1 array and the missing member 
drive (or drives, if it's a three-way mirror).



--
With kindest regards, Alexander.

⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org
⠈⠳⣄



Re: mdadm usage

2020-12-30 Thread Dan Ritter
Thomas A. Anderson wrote: 
> Which brings me to my final question, just for closure.
> 
> If hardware raid (like if I bought a controller), would it be any
> different, if I removed the drives and just put on one another machine
> -- would I be able to see the data on it like a normal drive? Or would I
> run into the same issue??

Much worse.

If you had done this correctly, you can take a set of drives
from a working mdadm system and bring them to another computer
and use mdadm to import them and use them. Depending on which
options you chose and which drives you brought over, you could
potentially do this with fewer drives than "all".

(The same is true of ZFS and BTRFS.)

On every hardware RAID system that I know of, you need the same
hardware RAID on the receiving system -- sometimes just a
compatible card, sometimes precisely the same card.

-dsr-



Re: mdadm usage

2020-12-30 Thread Thomas A. Anderson
When i enter mdadm --examine /dev/sdb

I get:

/dev/sdb:

    MBR Magic: aa55

Partition[0] : 3907026944 sectors at         2048 (type 83)


So I thought I was good. I then tried to reassemble:

mdadm -A -R /dev/md0 /dev/sdb  

I get:

mdadm: Cannot assemble mbr metadata on /dev/sdb

So, I tried it on sdb1

mdadm: /dev/sdb1 has no superblock - assembly aborted


it could all very well be, that I have borked these two drives. It's not
the end of the world, there was no data loss, and the data was already
transferred off of them. And, my lesson has been learned. RAID sounds
all good and dandy, and does provide some protection against data loss
caused by hardware failure.


Which brings me to my final question, just for closure.

If hardware raid (like if I bought a controller), would it be any
different, if I removed the drives and just put on one another machine
-- would I be able to see the data on it like a normal drive? Or would I
run into the same issue??


On 30.12.20 01:53, deloptes wrote:
> Thomas A. Anderson wrote:
>
>> Once I can get anything off one of these two drives, I will then switch
>> my current setup to 1 drive (8TB), with an attached 8TB drive that will
>> be be backup --weekly, or whatever.
> you can start one of the drives that was member of raid1 array on any
> computer, just as Reco said by assembling the mdraid on that computer.
>
> Check mdadm man page.
>
> For example I attached a disk that was member of raid through a USB storage
> box:
>
> * Examine the RAID table
>
>  mdadm --examine /dev/sdj
>  /dev/sdj:
> MBR Magic : aa55
>  Partition[0] :   997376 sectors at 2048 (type fd)
>  Partition[1] : 156250112 sectors at 999424 (type fd)
>  Partition[2] : 1796272130 sectors at 157251582 (type 05)
>
> * Assemble and run the RAID array with one disk
>
>  mdadm -A -R /dev/md0 /dev/sdj1
>  mdadm: /dev/md0 has been started with 1 drive (out of 2).
>  
>  mdadm -A -R /dev/md1 /dev/sdj2
>  mdadm: /dev/md1 has been started with 1 drive (out of 2).
>  
>  mdadm -A -R /dev/md2 /dev/sdj5
>  mdadm: /dev/md2 has been started with 1 drive (out of 2).
>
> Now I can do whatever I want with those partitions
>



Re: No GRUB with brand-new GPU

2020-12-30 Thread tomas
On Tue, Dec 29, 2020 at 10:58:38PM -0500, Felix Miata wrote:

[...]

> So people are supposed to discard or replace their older external devices just
> because something else came along that may or may not actually be as well 
> suited
> to task?

"Ending is better than mending"

  -- Aldous Huxley, "Brave New World", 1932

It seems sleep-learning has been successful since then ;-D

Cheers
 - t




Re: Debian 10 doing WEIRD THINGS

2020-12-30 Thread Andrei POPESCU
On Mi, 30 dec 20, 00:05:55, Kanito 73 wrote:
> 
> What can it be? What can I do? Would you recommend to perform a new 
> REAL AND FULL INSTALLATION (not restore of backed up brand new 
> installation) with latest Debian 10? 

Yes.

(unless the backup is pure Debian, without any third-party software 
(including Google Chrome)).

If you really, really need third-party software on your Debian you could 
ask the list for advice, mentioning exactly what you need it for and why 
the software included in Debian is inadequate.


Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




Re: mdadm usage

2020-12-30 Thread deloptes
Michael Stone wrote:

> Just be very careful about ever putting both halves of the array on the
> same computer again, as one will overwrite the other, potentially
> automatically, and not necessarily in a direction you like.

Yes, this is true. When you add the device back you should mark it as faulty.



Re: mdadm usage

2020-12-30 Thread Andrei POPESCU
On Ma, 29 dec 20, 21:37:01, Thomas A. Anderson wrote:
> 
> I have been using it for years, and while not a bad "thing," in
> retrospect, I'm not sure it actually met my criteria (eh, who knows,
> maybe it did), but now I have a more clear use case, basically to have a
> clean backup. I was using RAID1 as a "pseudo" backup, which I guess kind
> of worked, but I should use a specific backup solution.

Automatic mirroring / synchronizing is unsuitable for backups, because 
it will also sync accidental changes to files (including deletions) or 
filesystem corruptions in case of power outage or system crash (that may 
lead to corrupted files or entire directories "disappearing").

(speaking from experience here)

See also http://taobackup.com

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser

