Re: GIMP keeps crashing!

2020-10-02 Thread riveravaldez
On 10/2/20, Rebecca Matthews  wrote:
> ```
> GNU Image Manipulation Program version 2.10.8
> git-describe: GIMP_2_10_6-294-ga967e8d2c2
> C compiler:
> Using built-in specs.
> COLLECT_GCC=gcc
> COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/8/lto-wrapper
> OFFLOAD_TARGET_NAMES=nvptx-none
> OFFLOAD_TARGET_DEFAULT=1
> Target: x86_64-linux-gnu
> Configured with: ../src/configure -v --with-pkgversion='Debian 8.2.0-13'
> --with-bugurl=file:///usr/share/doc/gcc-8/README.Bugs
> --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --prefix=/usr
> --with-gcc-major-version-only --program-suffix=-8
> --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id
> --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix
> --libdir=/usr/lib --enable-nls --enable-clocale=gnu
> --enable-libstdcxx-debug --enable-libstdcxx-time=yes
> --with-default-libstdcxx-abi=new --enable-gnu-unique-object
> --disable-vtable-verify --enable-libmpx --enable-plugin
> --enable-default-pie --with-system-zlib --with-target-system-zlib
> --enable-objc-gc=auto --enable-multiarch --disable-werror
> --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32
> --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none
> --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu
> --host=x86_64-linux-gnu --target=x86_64-linux-gnu
> Thread model: posix
> gcc version 8.2.0 (Debian 8.2.0-13)

Hi, Rebecca, this is the Debian users list, not the GIMP developers' bug
tracker. You may want to check the best ways to report a problem and ask
for help, always in a gentle and respectful manner, because this is a
voluntary community effort.
In any case, you are not giving any relevant information, e.g., the
Debian version in use, the symptoms of the crash, the output of running
'gimp' in a terminal, etc.
With more (some) information about your issue, someone will probably be
able to offer an idea to solve it.
Best regards.



Re: can't boot to a graphical interface.

2020-10-02 Thread David
On Sat, 3 Oct 2020 at 03:24, Frank McCormick  wrote:

> While compiling an application today my Debian bullseye system somehow
> got messed up. It will boot to a CLI but no X, apparently because for
> some reason the system is unable to access some files in
> /usr/share/dbus-1. It keeps saying access denied. The directories and
> files are owned by root, and if I boot to a CLI I have no trouble
> accessing them using sudo and midnight commander.

> I tried reinstalling systemd but it ended with the same problem.

> Can anyone help?

1. Restore your operating system backup, or do a fresh install of Debian,
   perhaps in a virtual machine.
2. Repeat the steps that caused the problem, using 'script' to
capture all the commands and their output.
3. Post the 'script' capture here, or in a pastebin if it is too big for email.
4. Restore your operating system backup, or do a fresh install of Debian.
5. Restore your data backup.
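Step 2 above can be sketched with 'script' like this (the command and the log file name are only illustrative, not taken from the thread):

```shell
# Run a single command under 'script', which records everything that
# appears on the terminal. -q suppresses the start/finish banners.
script -q -c "echo hello from script" session.log
# session.log now holds the command's output, ready to paste into a mail.
```

For a whole session, run `script session.log` with no `-c`, work normally, and type `exit` to stop recording.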



Re: Debian VirtualBox installation problem

2020-10-02 Thread Eduardo Visbal
Friend, you must have something set up wrong. I use VirtualBox on
Windows and have Debian 10 created in it, and it works without problems;
I installed it from the netinstall ISO.




Eduardo Visbal
Linuxero #440451
http://esdebianfritto.blogspot.com/


On Fri, Oct 2, 2020 at 23:12, remgasis remgasis ()
wrote:

>
>
> On Fri, Oct 2, 2020, 8:03 PM Juan Miguel Gross 
> wrote:
>
>> Hi! I have a question, if this is the place where it can be answered... I am
>> trying to install Debian in VirtualBox on Windows, and when it reaches the
>> "Instalando man-db" step, or something like that, the installer hangs or
>> throws an error. I tried several ISOs, but it is the same with all of them.
>> What could I do? Thanks in advance!
>>
>
> It could be a problem with the VM configuration in VirtualBox or with
> the VirtualBox drivers. Try VMware Workstation... that one has not
> failed me.
>
>>


Re: Debian VirtualBox installation problem

2020-10-02 Thread remgasis remgasis
On Fri, Oct 2, 2020, 8:03 PM Juan Miguel Gross  wrote:

> Hi! I have a question, if this is the place where it can be answered... I am
> trying to install Debian in VirtualBox on Windows, and when it reaches the
> "Instalando man-db" step, or something like that, the installer hangs or
> throws an error. I tried several ISOs, but it is the same with all of them.
> What could I do? Thanks in advance!
>

It could be a problem with the VM configuration in VirtualBox or with
the VirtualBox drivers. Try VMware Workstation... that one has not
failed me.

>


Re: Debian VirtualBox installation problem

2020-10-02 Thread Juan Lavieri

Hello.

On 10/2/2020 at 7:44 p.m., Juan Miguel Gross wrote:
Hi! I have a question, if this is the place where it can be answered... I am
trying to install Debian in VirtualBox on Windows, and when it reaches the
"Instalando man-db" step, or something like that, the installer hangs or
throws an error. I tried several ISOs, but it is the same with all of them.
What could I do? Thanks in advance!



Please, can you indicate exactly what the error is?


It would also help to know which version you are installing, what type
of installation it is (netinstall, etc.). Where did you download the ISO
from? Did you check its integrity?



Regards.
--
To err is human, but it is more human to blame others



Re: General-Purpose Server for Debian Stable

2020-10-02 Thread David Christensen

On 2020-10-02 17:16, Linux-Fan wrote:

David Christensen writes:


The Fujitsu might do PCIe/NVMe 4X M.2 or U.2 SSD's with the right 
adapter card.


Been there, failed at that:


https://www.reichelt.de/pcie-x8-karte-zu-2x-nvme-m-2-key-m-lp-delock-90305-p256917.html?=pos_3=1 



I added two SSDs, a Crucial P5 SSD 2TB, M.2 NVMe and a Seagate FireCuda 
510 SSD 2TB M.2 PCIe (all ordered together) and started the server. 
Nothing was recognized at the OS level but opening up the 1U case showed 
a fault indicator LED at the PCIe slot where I had added the new card.


Perhaps a different brand adapter card would give better results.


Rather than a new VM server and a new workstation, perhaps a new 
workstation with enough memory and fast local working storage would be 
adequate for both purposes.


Maybe; I will get some prices for comparison... In terms of the base 
model price I do not expect there to be much difference between the 
server and the workstation with the same computation power, but if the 
workstation allows custom HDDs while staying under warranty it might be 
much cheaper.


Buying a new major brand server/ workstation with all the parts 
installed at the factory is going to be expensive.



As other readers have mentioned, a small business that builds to order 
should have better prices and may offer "quiet" systems.



In addition to U.2 drives, Intel makes server systems:

https://www.intel.com/content/www/us/en/products/servers/server-chassis-systems.html


And, Intel seems to be related to the Clear Linux distribution:

https://clearlinux.org/


I would expect an Intel server with Intel drives and Clear Linux should 
be a good combination.  Perhaps you should research Clear Linux and/or 
ask the community.



David



Re: [HS] looking for a mailbox provider

2020-10-02 Thread david
I have (finally) taken the time to contact all of these hosting 
providers; I get the impression that this is exactly what I was looking 
for, thanks!



On 2020-08-29 18:46, hamster wrote:

On 29/08/2020 at 12:21, David wrote:

Perhaps one of you has a suggestion other than OVH for hosting a site
to publish PDFs and plain-text files, MP3s and a few lightweight videos
(I "created" the site with prestashop, to distribute content for free
with a shop-like presentation)
And on which I could create several mailboxes, each with aliases?


https://www.lautre.net/
https://ouvaton.coop/
http://www.marsnet.org/
https://chatons.org/
there are others of the same kind; if you search, you will find them. I
once started making a list of them, but I have not kept it up to date
since


an impeccable policy on individual liberties and privacy; free of
charge, on the other hand… is not a good idea




Re: Mounting /dev/shm noexec

2020-10-02 Thread Steve McIntyre
Andy Smith wrote:

...

>Though note that it seems systemd once did use "noexec" for /dev/shm
>but stopped 10 years ago because it broke some uses of mmap:
>
>
> https://github.com/systemd/systemd/commit/501c875bffaef3263ad42c32485c7fde41027175

libffi also has a habit of using /dev/shm for writing temporary
trampolines for cross-language calls, and they need to be executable.

-- 
Steve McIntyre, Cambridge, UK.  st...@einval.com
"You can't barbecue lettuce!" -- Ellie Crane



Re: General-Purpose Server for Debian Stable

2020-10-02 Thread Linux-Fan

David Christensen writes:


On 2020-10-02 04:18, Linux-Fan wrote:

David Christensen writes:


On 2020-10-01 14:37, Linux-Fan wrote:


>2x4T SSD for fast storage (VMs, OS)

I suggest identifying your workloads, how much CPU, memory, disk I/O, etc.,  
each requires, and then dividing them across your several computers.


Division across multiple machines... I am already doing this for data that  
exceeds my current 4T storage (2x2T HDD, 2x2T "slow" SSD local and 4x1T  
outsourced to the other machine).


Are the SSD's 2 TB or 4 TB?


I currently have:

* 1x Samsung SSD 850 EVO 2TB
* 1x Crucial_CT2050MX300SSD1

together in an mdadm RAID 1.

For the new server, I will need more storage, so I envisioned getting two
NVMe U.2 SSDs for 2x4T -- mainly motivated by the fact that I would take  
the opportunity to upgrade performance and that they are not actually that  
expensive anymore:

https://www.conrad.de/de/p/intel-dc-p4510-4-tb-interne-u-2-pcie-nvme-ssd-6-35-cm-2-5-zoll-u-2-nvme-pcie-3-1-x4-ssdpe2kx040t801-1834315.html

Of course, given the fact that server manufacturers have entirely different  
views on prices (factor 7 in the Dell Webshop for instance :) ), I might  
need to change plans a little...


I currently do this for data I need rather rarely such that I can run the  
common tasks on a single machine. Doing this for all (or large amounts of  
data) will require running at least two machines at the same time which may  
increase the idle power draw and possibilities for failure?


More devices are going to use more power and have a higher probability of  
failure than a single device of the same size and type, but it's hard to  
predict for devices of different sizes and/or types.  I use HDD's for file  
server data and backups, and I use SSD's for system disks, caches, and/or  
fast local working storage.  I expect drives will break, so I have invested  
in redundancy and disaster planning/ preparedness.


Yes. It is close to the same here with the additional SSD usage for VMs  
and containers.


Understand that a 4 core 5 GHz CPU and a 16 core 2.5 GHz CPU have similar  
prices and power consumption, but the former will run sequential tasks twice  
as fast and the latter will run concurrent tasks twice as fast.


Is this still true today? AFAIK all modern CPUs "boost" their frequency if  
they are lightly loaded. Also, the larger CPUs tend to come with more cache  
which may speed up single-core applications, too.


Yes, frequency scaling blurs the line.  But, the principle remains.


I am not familiar with AMD products, but Intel does offer Xeon processors  
with fewer cores and higher frequencies specifically for workstations:


https://www.intel.com/content/www/us/en/products/docs/processors/xeon/ultimate-workstation-performance.html


AMD does it too, but their variants are more targeted at saving license  
costs by reducing the number of cores. As I am mostly using free software, I  
can stick to the regular CPUs.


If I go for a workstation, I will end up with Intel anyways, because Dell,  
HP and Fujitsu seem to agree that Intels are the only true workstation CPUs.  

I would think that you should convert one of your existing machines into a  
file server.  Splitting 4 TB across 2 @ 2 TB HDD's and 2 @ 4 TB SSD's can  
work, but 4 @ 4 TB SSD's with a 10 Gbps Ethernet connection should be  
impressive.  If you choose ZFS, it will need memory.  The rule of thumb is 5  
GB of memory per 1 TB of storage.  So, pick a machine that has at least 20  
GB of memory.


4x4T is surely nice and future-proof but currently above budget :)


Yes, $2,000+ for 4 @ 4 TB SATA III SSD's is a lot of money.  But, U.2  
PCIe/NVMe 4X drives are even more money.


Noted. Actually, 4x4T SATA is affordable, as is 2x4T U.2 if not bought from  
the server vendor [prices from HPE are still pending, but I am scared  
by browsing for them on the Internet already...] :)


[...]

down to their speed -- the current "fastest" system here has a Xeon E3-1231  
v3 and while it has 3.4GHz it is surely slower (even singlethreaded) than  
current 16-core server CPUs...


That would make a good file server; even better with 10 Gbps networking.


10GE is in place already, but there are other hardware limitations (see  
next).



Thinking of it, a possible distribution across multiple machines may be

* (Existent) Storage server (1U, existent Fujitsu RX 1330 M1)
   [It does not do NVMe SSDs, though -- alternatively put the disks
    in the VM server?]
* (New) VM server (2U, lots of RAM)
* (New) Workstation (4U, GPU)

For interactive use and experimentation with VMs I would need to power-on  
all three systems. For non-VM use, it would have to be two... it is an  
interesting solution that stays within what the systems were designed to do  
but I think it is currently too much for my uses.


The Fujitsu might do PCIe/NVMe 4X M.2 or U.2 SSD's with the right adapter  
card.


Been there, failed at that:

The backplane is a SAS/SATA 

Debian VirtualBox installation problem

2020-10-02 Thread Juan Miguel Gross
Hi! I have a question, if this is the place where it can be answered... I am
trying to install Debian in VirtualBox on Windows, and when it reaches the
"Instalando man-db" step, or something like that, the installer hangs or
throws an error. I tried several ISOs, but it is the same with all of them.
What could I do? Thanks in advance!


Re: General-Purpose Server for Debian Stable

2020-10-02 Thread David Christensen

On 2020-10-02 04:18, Linux-Fan wrote:

David Christensen writes:


On 2020-10-01 14:37, Linux-Fan wrote:


>2x4T SSD for fast storage (VMs, OS)

I suggest identifying your workloads, how much CPU, memory, disk I/O, 
etc., each requires, and then dividing them across your several 
computers.


Division across multiple machines... I am already doing this for data 
that exceeds my current 4T storage (2x2T HDD, 2x2T "slow" SSD local and 
4x1T outsourced to the other machine). 


Are the SSD's 2 TB or 4 TB?


I currently do this for data I 
need rather rarely such that I can run the common tasks on a single 
machine. Doing this for all (or large amounts of data) will require 
running at least two machines at the same time which may increase the 
idle power draw and possibilities for failure?


More devices are going to use more power and have a higher probability 
of failure than a single device of the same size and type, but it's hard 
to predict for devices of different sizes and/or types.  I use HDD's for 
file server data and backups, and I use SSD's for system disks, caches, 
and/or fast local working storage.  I expect drives will break, so I 
have invested in redundancy and disaster planning/ preparedness.



Understand that a 4 core 5 GHz CPU and a 16 core 2.5 GHz CPU have 
similar prices and power consumption, but the former will run 
sequential tasks twice as fast and the latter will run concurrent 
tasks twice as fast.


Is this still true today? AFAIK all modern CPUs "boost" their frequency 
if they are lightly loaded. Also, the larger CPUs tend to come with more 
cache which may speed up single-core applications, too.


Yes, frequency scaling blurs the line.  But, the principle remains.


I am not familiar with AMD products, but Intel does offer Xeon 
processors with fewer cores and higher frequencies specifically for 
workstations:


https://www.intel.com/content/www/us/en/products/docs/processors/xeon/ultimate-workstation-performance.html


I would think that you should convert one of your existing machines 
into a file server.  Splitting 4 TB across 2 @ 2 TB HDD's and 2 @ 4 TB 
SSD's can work, but 4 @ 4 TB SSD's with a 10 Gbps Ethernet connection 
should be impressive.  If you choose ZFS, it will need memory.  The 
rule of thumb is 5 GB of memory per 1 TB of storage.  So, pick a 
machine that has at least 20 GB of memory.


4x4T is surely nice and future-proof but currently above budget :) 


Yes, $2,000+ for 4 @ 4 TB SATA III SSD's is a lot of money.  But, U.2 
PCIe/NVMe 4X drives are even more money.



That's why I use obsolete, but new, Seagate Constellation ES.2 SATA III 
3 TB HDD's -- ~$50 each on eBay.  Buy four drives for $200, buy small 
SATA III SSD cache and log devices for $100, and you will have 75% the 
capacity and excellent performance for typical file server workloads for 
$300.



I saw 
that the Supermicro AS-2113S-WTRT can do 6xU.2 drives. In case I chose 
Supermicro this would allow upgrading to such a 4x4T configuration.


As for the workstation, it is difficult to find a vendor that supports 
Debian.  But, there are vendors that support Ubuntu; which is based 
upon Debian.  So, you can run Ubuntu and you might be able to run Debian:


https://html.duckduckgo.com/html?q=ubuntu%20workstation


My experience with HP and Fujitsu Workstations is that they run well 
with Debian. I am still thinking that buying two systems will be more 
expensive and draw more power. Using one of the existent systems will 
slow some things down to their speed -- the current "fastest" system 
here has a Xeon E3-1231 v3 and while it has 3.4GHz it is surely slower 
(even singlethreaded) than current 16-core server CPUs...


That would make a good file server; even better with 10 Gbps networking.



Thinking of it, a possible distribution across multiple machines may be

* (Existent) Storage server (1U, existent Fujitsu RX 1330 M1)
   [It does not do NVMe SSDs, though -- alternatively put the disks
    in the VM server?]
* (New) VM server (2U, lots of RAM)
* (New) Workstation (4U, GPU)

For interactive use and experimentation with VMs I would need to 
power-on all three systems. For non-VM use, it would have to be two... 
it is an interesting solution that stays within what the systems were 
designed to do but I think it is currently too much for my uses.


The Fujitsu might do PCIe/NVMe 4X M.2 or U.2 SSD's with the right 
adapter card.



Depending upon what your VM's are doing, a SATA III SSD might be enough 
or you might want something faster.  Similar comment for the workstation.



Rather than a new VM server and a new workstation, perhaps a new 
workstation with enough memory and fast local working storage would be 
adequate for both purposes.




Still, thanks for the suggestion.


YW.  :-)


David



Re: Mounting /dev/shm noexec

2020-10-02 Thread Andy Smith
Hello,

On Fri, Oct 02, 2020 at 10:35:51PM +0300, Valter Jaakkola wrote:
> So where can I change the mounting parameters of /dev/shm, or otherwise 
> arrange
> it so that /dev/shm is noexec already at/after boot?
> 
> (Out of curiosity, where is /dev/shm mounted from?)

I think from systemd:


https://github.com/systemd/systemd/blob/c7828862b39883cf1f55235a937d29588d5a806b/src/core/mount-setup.c#L79

and I think if you wish to alter the mount options you should put it
in /etc/fstab and then systemd will do the equivalent of:

# mount -oremount /dev/shm

to get your options set, though there would be a small window where
it had the default options.
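For illustration, the /etc/fstab line in question might look like the following (a sketch; the option list mirrors common tmpfs defaults plus noexec, so compare it with your own `mount` output before copying):

```
tmpfs  /dev/shm  tmpfs  rw,nosuid,nodev,noexec  0  0
```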

Though note that it seems systemd once did use "noexec" for /dev/shm
but stopped 10 years ago because it broke some uses of mmap:


https://github.com/systemd/systemd/commit/501c875bffaef3263ad42c32485c7fde41027175

On SysV init systems I think this is part of the initscripts
package.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Mounting /dev/shm noexec

2020-10-02 Thread Sven Joachim
On 2020-10-02 22:35 +0300, Valter Jaakkola wrote:

> In an effort to increase security, one of the things I'm trying to do is to have
> no world-writable directories where anything (well, binaries at least) could 
> be
> executed from. I use Debian Linux 10 amd64. (I'm a home user.)
>
> When I run `sudo find / -type d -perm -2` and remove from the listing the
> directories which are on noexec-mounted partitions, just /dev/shm and
> /dev/mqueue are left (and some docker directories in /var/lib/docker/overlay2,
> to which I can't write as a normal user).

There are a few other directories where users can typically write to
and execute binaries, though: /tmp, /var/tmp, $HOME, /run/user/$USER.

> The problem for me is mounting /dev/shm noexec -- I can't find where to do 
> it. I
> couldn't find a lot of information about this on the internet. The few sources
> mostly only suggest adding it to fstab, but I'm hesitant about this as it 
> isn't
> there already. I'd rather change the settings at the source, where it's 
> mounted
> in the first place.
>
> I also ran `grep -rwlsI -e shm` through /etc and /usr/share but didn't find
> anything that would've looked like the mounting of /dev/shm, or where 
> parameters
> for it could have been changed.
>
> So where can I change the mounting parameters of /dev/shm, or otherwise 
> arrange
> it so that /dev/shm is noexec already at/after boot?

In /etc/fstab. :-)

> (Out of curiosity, where is /dev/shm mounted from?)

It's mounted by systemd; the list of core filesystems it mounts is hardcoded
in the source[1].  Filesystems that appear in /etc/fstab are remounted
with the options given there (for the gory details see
systemd-fstab-generator(8) and systemd.mount(5)).

Cheers,
   Sven


1. 
https://sources.debian.org/src/systemd/241-7~deb10u4/src/core/mount-setup.c/#L61



Re: Mounting /dev/shm noexec

2020-10-02 Thread deloptes
Valter Jaakkola wrote:

> So where can I change the mounting parameters of /dev/shm, or otherwise
> arrange it so that /dev/shm is noexec already at/after boot?
> 
> (Out of curiosity, where is /dev/shm mounted from?)

perhaps you are looking for tmpfs settings.
At least here it is mounted as tmpfs, and this is done by udev AFAIK.

try
$ grep -r tmpfs /etc/




Re: can't boot to a graphical interface.

2020-10-02 Thread deloptes
Frank McCormick wrote:

> While compiling an application today my Debian bullseye system somehow
> got messed up.

next time compile in chroot and as a dedicated user (not root)



Mounting /dev/shm noexec

2020-10-02 Thread Valter Jaakkola
Hi,

In an effort to increase security, one of the things I'm trying to do is to have
no world-writable directories where anything (well, binaries at least) could be
executed from. I use Debian Linux 10 amd64. (I'm a home user.)

When I run `sudo find / -type d -perm -2` and remove from the listing the
directories which are on noexec-mounted partitions, just /dev/shm and
/dev/mqueue are left (and some docker directories in /var/lib/docker/overlay2,
to which I can't write as a normal user).

I assume that /dev/mqueue being exec-mounted doesn't have the same risks as
/dev/shm, as mqueue is not(?) an ordinary filesystem where one could save files
and execute them, right? (Or so it appears to me after some experimentation and
reading.)

The problem for me is mounting /dev/shm noexec -- I can't find where to do it. I
couldn't find a lot of information about this on the internet. The few sources
mostly only suggest adding it to fstab, but I'm hesitant about this as it isn't
there already. I'd rather change the settings at the source, where it's mounted
in the first place.

I also ran `grep -rwlsI -e shm` through /etc and /usr/share but didn't find
anything that would've looked like the mounting of /dev/shm, or where parameters
for it could have been changed.

So where can I change the mounting parameters of /dev/shm, or otherwise arrange
it so that /dev/shm is noexec already at/after boot?

(Out of curiosity, where is /dev/shm mounted from?)

(Additional suggestions regarding security are most welcome, too.)

Kind regards,
Valter Jaakkola




Re: "ps -o %mem" and free memory in Linux

2020-10-02 Thread Fabrice BAUZAC-STEHLY


Victor Sudakov writes:

> I summed up with awk the values of %mem, which are supposed to be "ratio
> of the process's resident set size  to the physical memory", correct?
>
> In my understanding, the value of %mem should indicate how much physical
> memory is spent on the "individual" part of the process, otherwise the
> parameter is either useless or misdocumented.

No, in resident memory you can find memory private to a process as well
as memory shared between processes.

And in memory private to a process, you can find resident memory and
non-resident memory.

According to Linux's Documentation/filesystems/proc.txt, we can find
this information in /proc/PID/status:

  VmData   size of private data segments
e.g.  VmData:  123004 kB

This looks like the amount of private memory, but I'm not sure.
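On a Linux box, that field can be pulled out directly, e.g. with this small sketch that reads the current process's own status file:

```shell
# Print the size of the private data segments (VmData) of this process,
# as reported by the kernel in /proc/self/status (value is in kB).
awk '/^VmData/ {print $2, $3}' /proc/self/status
```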

--
Fabrice BAUZAC-STEHLY
PGP 015AE9B25DCB0511D200A75DE5674DEA514C891D



Re: SRT-tools version upgrade

2020-10-02 Thread john doe

On 10/2/2020 9:13 PM, Greg Wooledge wrote:

On Fri, Oct 02, 2020 at 11:47:42AM -0700, Alan Latteri wrote:

Hello,

I request that srt-tools be upgraded to the latest version, 1.4.2.


File a wishlist bug against the package.



Or chip in and update the package yourself! :)


--
John Doe



Re: General-Purpose Server for Debian Stable

2020-10-02 Thread Linux-Fan

Linux-Fan writes:


Hello fellow list users,

I am constantly needing more computation power, RAM and HDD storage such
that I have finally decided to buy a server for my next "workstation". The


[...]


* Of course, if there are any other comments, I am happy to hear them, too.

  I am looking into all options although a fully self-built system is
  probably too much. I once tried to (only) get a decent PC case and failed
  at it... I can only imagine it being worse for rackmount PC cases and
  creating a complete system composed of individual parts?


Hello everyone,

I just wanted to thank everyone for the great replies they sent!
I now know some additional things to consider and even got some progress on  
the unrelated e-mail signatures problem.


Still unsure where I will end up with this, but if interested, I could post  
the actual results from my journey once I have the hardware. It will take  
some time, for sure, but will probably happen before next year :)


Thanks again
Linux-Fan

--
── ö§ö ── 8 bit for signature ── ö§ö ──




Re: SRT-tools version upgrade

2020-10-02 Thread Greg Wooledge
On Fri, Oct 02, 2020 at 11:47:42AM -0700, Alan Latteri wrote:
> Hello,
> 
> I request that srt-tools be upgraded to the latest version, 1.4.2.

File a wishlist bug against the package.



SRT-tools version upgrade

2020-10-02 Thread Alan Latteri
Hello,

I request that srt-tools be upgraded to the latest version, 1.4.2.

Thanks,
Alan



Re: [HS] looking for a mailbox provider

2020-10-02 Thread Basile Starynkevitch



On 10/2/20 7:46 PM, david wrote:
Yes, security-wise it's great, but I suppose you need a fairly 
reliable internet connection?
I live in the countryside, with Free in partial unbundling, so I use 
the Orange network (or so I've been told)



I pay a few euros per month for my mailbox (and my DNS domain) at 
Gandi https://www.gandi.net/


People close to me are happy with the mail service sold (a few euros 
per month) by https://protonmail.com/


I'll recall the Anglo-Saxon saying "there is no such thing as a free 
lunch", which can be translated as "when it's free, you are the product"


Freely

--
Basile STARYNKEVITCH   == http://starynkevitch.net/Basile
opinions are mine only - les opinions sont seulement miennes
Bourg La Reine, France; 
(mobile phone: cf my web page / voir ma page web...)



Re: [OT] SSH access

2020-10-02 Thread User
On Fri, 02 Oct 2020 09:20:19 +0200, Fran Blanco wrote:

> Any rule in hosts.allow or hosts.deny?
> 
> Can you access it from the local network? To rule out a routing or NAT
> problem in the routers you mention.
> 
> I assume you have obviously checked that SSH is listening, and on
> which port.
> 
> Regards

I have identified the problem: the inbound router filters SSH on port 22 
and I cannot change it; I have no rules in hosts.*.

I will try changing the inbound port on the server to 80.

Thanks.
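For what it's worth, moving sshd to port 80 would involve something like the following (a sketch; the host name is hypothetical, and port 80 will conflict with any web server already bound to it):

```
# /etc/ssh/sshd_config on the server:
Port 80

# then, from the client:
#   ssh -p 80 user@server.example.org
```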






Re: [HS] looking for a mailbox provider

2020-10-02 Thread david
Yes, security-wise it's great, but I suppose you need a fairly 
reliable internet connection?
I live in the countryside, with Free in partial unbundling, so I use 
the Orange network (or so I've been told)


I am 300 m from the distribution box, but I still get very variable 
upload and download speeds; my box falls back to factory mode several 
times a week and I very regularly suffer disconnections (caused by 
Orange, it seems...)
I am one of those people who would love to have 3G and 4G, and why not 
fiber, before 5G :(
What reassures me is that it's even worse at my neighbors' (SFR, 
Orange among others)


So I think this solution is not suitable for me
But thank you for suggesting it

David


On 2020-08-31 21:41, Petrusko wrote:

A Raspberry Pi (or equivalent) with its power supply, a micro-SD card
and a micro-USB cable ;)
Admittedly a small up-front investment, but at 2 W peak power draw it
does not cost much in electricity 24/7 ;-)
A full backup onto your own computer from time to time...
Why not Yunohost ( yunohost.org ) to get a small mail server that works
out of the box on a Raspbian/Debian base.

And this way it is self-hosted, so privacy and all that... it's the
best ;)


On 24/08/2020 at 18:16, Sébastien Dinot wrote:
Last solution, which I use as a secondary one: self-hosting your own 
mail server (but there is also a financial and human investment there, 
probably higher in the end than what a mailbox costs at protonmail or 
another provider that respects our privacy and our data).




Re: [OT] SSH access

2020-10-02 Thread User
On Fri, 02 Oct 2020 05:31:47 +, Andrés DG wrote:

> Nmap should return the ports that are open.
> Unless the ICMP protocol is blocked (in that case, I believe the PS and
> PA options worked for getting info about the hosts connected to the
> network: nmap -PS ip_to_query  or  nmap -PA ip_to_query).
> In my case it happened with the office router, which out of the blue
> would not let me access the server over SSH remotely (through a public
> IP). I had to reboot it to be able to access it remotely (from the
> local network I never had problems). I ended up requesting a static
> public IP and since then, so far, I have had no problems. Even so, in
> my case, I think it must have been some fault in the office router.
> Sorry I cannot help you more.

You did help me, thanks.

I ran nmap against the LAN, and it turns out it is the router that is 
filtering SSH. I will try to get in via port 80!

Thanks.



Re: Black screen in OBS Studio

2020-10-02 Thread remgasis remgasis
Sorry for the top-posting

On Fri, Oct 2, 2020, 2:38 PM remgasis remgasis  wrote:

> You could look for a more or less old version of Camtasia, with its serial
>
> On Fri, Oct 2, 2020, 9:38 AM Fabian Dos Santos 
> wrote:
>
>> Hello,
>> I need help recording the laptop's screen. With any program I use:
>> vokoscreen, OBS Studio and others, the sound is recorded but the
>> screen stays completely black. The same thing happens when I want to
>> record the screen with an application such as Zoom and others.
>> I would greatly appreciate your reply.
>> Regards!
>> FDS
>>
>> --
>>
>>
>> *Fabián E. Dos Santos, Associate Instructor in Holokinetic Psychology
>> of the International Academy of Sciences (RSM), Mexico; Coordinator of
>> Academic Activities and Research, International Academy of Holokinetic
>> Psychology*
>>
>>
>> *1-Courses, Books, Audio, Video, Meetings, Congresses and News:*
>>
>> *www.percepcionunitaria.org*
>> *2-Take the Holokinetic Psychology Course online (CIPH):*
>>
>> *www.psicologiaholokinetica.org*
>> *3-Books: "Holokinesis Libros"*
>>
>> *www.holokinesislibros.com*
>>
>


Re: Pantalla negra en OBS Studio

2020-10-02 Thread remgasis remgasis
You could look for a more or less old version of Camtasia, with its serial

On Fri, Oct 2, 2020, 9:38 AM Fabian Dos Santos  wrote:

> Hello,
> I need help recording the laptop's screen. Whatever program I use
> (vokoscreen, OBS Studio and others), the sound gets recorded but the
> screen stays completely black. The same thing happens when I try to
> record the screen with an application such as Zoom.
> I would greatly appreciate a reply.
> Regards!
> FDS


can't boot to a graphical interface.

2020-10-02 Thread Frank McCormick
While compiling an application today, my Debian bullseye system somehow
got messed up. It boots to a CLI but no X, apparently because for some
reason the system is unable to access some files in /usr/share/dbus-1.
It keeps saying access denied. The directories and files are owned by
root, and if I boot to a CLI I have no trouble accessing them using sudo
and Midnight Commander.
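Since sudo works but the unprivileged services do not, the usual suspect is a path component that lost its world read/execute bits. A sketch of how to check and rehearse the repair (the 755/root:root values are the standard Debian defaults, an assumption not verified against this particular system):

```shell
# Show permissions of every component on the path; any component lacking
# r-x for "other" yields "access denied" for non-root users and services:
namei -l /usr/share/dbus-1 || ls -ld / /usr /usr/share /usr/share/dbus-1

# On Debian these should all be root:root, mode drwxr-xr-x (755).
# If one was damaged (e.g. by a stray chmod during the build), restore it:
#   sudo chmod 755 /usr/share/dbus-1

# The fix can be rehearsed safely on a scratch directory:
d=$(mktemp -d)
mkdir "$d/share"
chmod 700 "$d/share"      # reproduces the symptom for non-owners
chmod 755 "$d/share"      # restores the expected mode
stat -c '%a' "$d/share"   # prints 755
rm -rf "$d"
```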


I tried reinstalling systemd but it ended with the same problem.

Can anyone help?

Thanks


--
Frank McCormick



GIMP keeps crashing!

2020-10-02 Thread Rebecca Matthews
```
GNU Image Manipulation Program version 2.10.8
git-describe: GIMP_2_10_6-294-ga967e8d2c2
C compiler:
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/8/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Debian 8.2.0-13'
--with-bugurl=file:///usr/share/doc/gcc-8/README.Bugs
--enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --prefix=/usr
--with-gcc-major-version-only --program-suffix=-8
--program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id
--libexecdir=/usr/lib --without-included-gettext --enable-threads=posix
--libdir=/usr/lib --enable-nls --enable-clocale=gnu
--enable-libstdcxx-debug --enable-libstdcxx-time=yes
--with-default-libstdcxx-abi=new --enable-gnu-unique-object
--disable-vtable-verify --enable-libmpx --enable-plugin
--enable-default-pie --with-system-zlib --with-target-system-zlib
--enable-objc-gc=auto --enable-multiarch --disable-werror
--with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32
--enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none
--without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu
--host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 8.2.0 (Debian 8.2.0-13)
```


Re: Signing emails, was Re: General-Purpose Server for Debian Stable

2020-10-02 Thread David Wright
On Fri 02 Oct 2020 at 13:18:29 (+0200), Linux-Fan wrote:–
> 
> OT: The hints about the details of e-mail encoding and signing are
> appreciated. Some other notes are here:
> https://sourceforge.net/p/courier/mailman/courier-cone/?viewmonth=202010

I took a look at that thread.
> From: Linux-Fan  - 2020-10-02 11:30:16
> > I discovered that the workaround is exactly to use some 8-bit
> > characters which will avoid the re- encoding throughout transmission.

Exactly what I would suggest, and the opposite of my advice in
https://lists.debian.org/debian-user/2020/06/msg00598.html
where the problem was reversed. So you could hit all your replies by
modifying your attribution (as I have, above), but better would be
the hyphen in your sign–off (← as here) particularly if it's automated,
like mine. (I don't make my sign-off into a syntactical signature.)
I don't know how entirely 7bit attachments would be treated.

> From: Sam Varshavchik  - 2020-10-02 11:21:44
> > Cone already uses quoted-printable when the message contains 8-bit 
> > characters.

I'd use base64, myself, with a signed message.

> > There is no valid reason whatsoever to reencode 7-bit only mail
> > content. I cannot find any documentation that specifies any
> > restrictions on signed mail, other than to avoid 8-bit content.

Note that you have been using Content-Type: … charset="UTF-8"
RFC 3156 says "many existing mail gateways will detect if the next hop
does not support MIME or 8-bit data and perform conversion to either
Quoted-Printable or Base64".
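That gateway conversion can be illustrated with a one-liner (a sketch; Python's stdlib quopri is used here purely as a convenient quoted-printable encoder, the byte values are the UTF-8 encoding of "é"):

```shell
# Feed one 8-bit character ("é", UTF-8 bytes C3 A9) through a
# quoted-printable encoder, as a gateway might do when re-encoding:
printf 'caf\xc3\xa9\n' \
  | python3 -c 'import sys, quopri; sys.stdout.buffer.write(quopri.encodestring(sys.stdin.buffer.read()))'
# prints: caf=C3=A9
```

Once the body already contains such escapes, a downstream hop has no 8-bit content left to re-encode, which is exactly the workaround being discussed.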

> > Trying to work around someone else's bugs is a major waste of
> > time. The correct solution is for someone else to fix the bug.

… which, of course, is nonsense. Since mid-August, I have been using a
different email smarthost for posts to just this list because two MTAs
are currently unable to cooperate successfully. Should I stay silent
until that bug is fixed? (Don't answer that!)

They obviously haven't read the RFC:

   "Implementor's note: It cannot be stressed enough that applications
   using this standard follow MIME's suggestion that you "be
   conservative in what you generate, and liberal in what you
   accept."  In this particular case it means it would be wise for an
   implementation to accept messages with any content-transfer-encoding,
   but restrict generation to the 7-bit format required by this memo.
   This will allow future compatibility in the event the Internet SMTP
   framework becomes 8-bit friendly."

So my guess is that your mailer is sending *potentially*
8bit content without encoding it, and an MTA is encoding it
because it's not expected to check for solely 7bit content
just because Content-Transfer-Encoding is set to 7bit.

Sorry that I can't check whether your signing is successful
as I don't maintain any personal keyring.

Cheers,
David.



Re: LaTeX et LaTeXila

2020-10-02 Thread f6k
Hello,

On Thu, Oct 01, 2020 at 07:54:57PM +0200, F. Dubois wrote:

> Here is the command that is causing trouble:
> 
> \newcommand{\encadrecouleur}[2]{\psframebox[fillstyle=solid,fillcolor=#1,framearc=0.15]{\begin{minipage}{\columnwidth-25\fboxsep}#2\end{minipage}}}

I see the replies are not exactly rushing in (yet!), so allow me. For
help with LaTeX, I strongly advise you to take a look at the newsgroup
fr.comp.text.tex. You do indeed need some kind of access to Usenet, but
that is where I find the best discussions and the most substantial help
around LaTeX. Hoping that, if you do not get the help you want here,
you can find your way to that group.
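For what it is worth, one frequent source of errors in a definition like the quoted one is the dimension arithmetic `\columnwidth-25\fboxsep` inside the minipage argument: plain LaTeX does not evaluate it unless the `calc` package is loaded. A hedged sketch of an alternative, assuming an e-TeX engine (which provides `\dimexpr`), untested against the original document:

```latex
% Same definition, with the width computed via \dimexpr so no extra
% package is needed (pstricks is still required for \psframebox):
\newcommand{\encadrecouleur}[2]{%
  \psframebox[fillstyle=solid,fillcolor=#1,framearc=0.15]{%
    \begin{minipage}{\dimexpr\columnwidth-25\fboxsep\relax}%
      #2%
    \end{minipage}}}
```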

-f6k

-- 
~{,_,"> indignus LabRat - ftp://shl.huld.re



Re: General-Purpose Server for Debian Stable

2020-10-02 Thread Stefan Monnier
> If it's quiet you want, try https://silentpc.com/. They are not cheap,
> but their products are solid and reliable, and quiet. The two I have
> are so quiet that I can hear the heads move on the 3.5" disk drives in
> them.

Sadly, they get noisier when you use SSDs instead: you can't hear the
heads move any more ;-(


Stefan



Re: General-Purpose Server for Debian Stable

2020-10-02 Thread Charles Curley
On Thu, 01 Oct 2020 23:37:16 +0200
Linux-Fan  wrote:

> Hello fellow list users,
> 
> I am constantly needing more computation power, RAM and HDD storage
> such that I have finally decided to buy a server for my next
> "workstation".

If it's quiet you want, try https://silentpc.com/. They are not cheap,
but their products are solid and reliable, and quiet. The two I have
are so quiet that I can hear the heads move on the 3.5" disk drives in
them.

-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/




Pantalla negra en OBS Studio

2020-10-02 Thread Fabian Dos Santos
Hello,
I need help recording the laptop's screen. Whatever program I use
(vokoscreen, OBS Studio and others), the sound gets recorded but the
screen stays completely black. The same thing happens when I try to
record the screen with an application such as Zoom.
I would greatly appreciate a reply.
Regards!
FDS
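A black frame from every capture tool at once usually points at the display stack rather than at any one program; a common culprit is running a Wayland session while the recorder can only grab X11. A quick check (sketch; the variable is set by the desktop session, and the value names are the conventional ones):

```shell
# Prints "x11" or "wayland" depending on the running session:
echo "${XDG_SESSION_TYPE:-unknown}"

# Under Wayland, OBS needs a PipeWire / xdg-desktop-portal screen-capture
# source; alternatively, choose an "Xorg" session on the login screen.
```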



Re: General-Purpose Server for Debian Stable

2020-10-02 Thread Linux-Fan

David Christensen writes:


On 2020-10-01 14:37, Linux-Fan wrote:


[...]


Typical workloads:
data compression (Debian live build, xz),
virtual machines (software installation, updates)

Rarely:
GPGPU (e.g. nVidia CUDA, but some experimentation with OpenCL, too)
single-core load coupled with very high RAM use (cbmc)


[...]

I suggest identifying your workloads, how much CPU, memory, disk I/O, etc.,  
each requires, and then dividing them across your several computers.


Division across multiple machines... I am already doing this for data that
exceeds my current 4T storage (2x2T HDD, 2x2T "slow" SSD local and 4x1T
outsourced to the other machine). I currently do this only for data I need
rather rarely, such that I can run the common tasks on a single machine.
Doing this for all (or large amounts of) data would require running at
least two machines at the same time, which may increase the idle power
draw and the possibilities for failure.


Understand that a 4 core 5 GHz CPU and a 16 core 2.5 GHz CPU have similar  
prices and power consumption, but the former will run sequential tasks twice  
as fast and the latter will run concurrent tasks twice as fast.


Is this still true today? AFAIK all modern CPUs "boost" their frequency if  
they are lightly loaded. Also, the larger CPUs tend to come with more cache  
which may speed up single-core applications, too.


I would think that you should convert one of your existing machines into a  
file server.  Splitting 4 TB across 2 @ 2 TB HDD's and 2 @ 4 TB SSD's can  
work, but 4 @ 4 TB SSD's with a 10 Gbps Ethernet connection should be  
impressive.  If you choose ZFS, it will need memory.  The rule of thumb is 5  
GB of memory per 1 TB of storage.  So, pick a machine that has at least 20  
GB of memory.


4x4T is surely nice and future-proof but currently above budget :) I saw  
that the Supermicro AS-2113S-WTRT can do 6xU.2 drives. In case I chose  
Supermicro this would allow upgrading to such a 4x4T configuration.


As for the workstation, it is difficult to find a vendor that supports  
Debian.  But, there are vendors that support Ubuntu; which is based upon  
Debian.  So, you can run Ubuntu and you might be able to run Debian:


https://html.duckduckgo.com/html?q=ubuntu%20workstation


My experience with HP and Fujitsu workstations is that they run well with
Debian. I am still thinking that buying two systems will be more expensive
and draw more power. Using one of the existing systems will slow some
things down to its speed -- the current "fastest" system here has a Xeon
E3-1231 v3 and while it runs at 3.4 GHz it is surely slower (even
single-threaded) than current 16-core server CPUs...


Thinking of it, a possible distribution across multiple machines may be

* (Existent) Storage server (1U, existent Fujitsu RX 1330 M1)
  [It does not do NVMe SSDs, though -- alternatively put the disks
   in the VM server?]
* (New) VM server (2U, lots of RAM)
* (New) Workstation (4U, GPU)

For interactive use and experimentation with VMs I would need to power on
all three systems. For non-VM use, it would have to be two... it is an
interesting solution that stays within what the systems were designed to
do, but I think it is currently too much for my uses.


Still, thanks for the suggestion.

OT: The hints about the details of e-mail encoding and signing are  
appreciated. Some other notes are here:

https://sourceforge.net/p/courier/mailman/courier-cone/?viewmonth=202010

Linux-Fan

Non-ASCII chars follow...:  ö § ö ─
*E-Mail signed for experimentation*




Re: General-Purpose Server for Debian Stable

2020-10-02 Thread Dan Ritter
Linux-Fan wrote: 
> Dan Ritter writes:
> 
> > You should also look at machines made by SuperMicro and resold
> > via a number of VARs. My company is currently using Silicon
> > Mechanics and is reasonably happy with them. We have a few HPs
> > as well.


I forgot to mention: though I wouldn't characterize their
support as extensive, Silicon Mechanics will happily install
several distributions, including Debian Stable. Their online
build-and-price system is quite well done.

-dsr-



Re: Firefox over JACK in Debian Testing

2020-10-02 Thread riveravaldez
On 10/1/20, Olivier Humbert  wrote:
>> On 6/6/2020 11:25 PM, riveravaldez wrote:
>
>> AFAIK Firefox lacks JACK support (in the sense that you can start
>> JACK and then Firefox and then, automatically, all I/O audio-ports
>> Firefox generated, appear as available JACK connections, let's say)
>
> Join in if you like to do so
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=844688 .
>
> Cheers,
> Olivier

Thanks, Olivier.
That seems like a 'requested feature', but I'm failing to see what the
issue is (there's a repetition of "please do", but no elaboration on
the matter).
I guess it would be useful to know at least why the official
firefox-esr package is not compiled with JACK support enabled.
Thanks again,
cheers.



Re: [OT] Acceso ssh

2020-10-02 Thread Fran Blanco
Any rule in hosts.allow or hosts.deny?

Can you access it from the local network? That would rule out a routing
or NAT problem in the routers you mention.

I take it you have, of course, checked that SSH is listening and on
which port.
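Those checks, as commands (a sketch; the file names are the standard Debian locations, and `sshd -T` needs root):

```shell
# Any TCP-wrapper rules that could block remote clients?
grep -H . /etc/hosts.allow /etc/hosts.deny 2>/dev/null

# Is sshd actually listening, and on which address/port?
ss -tlnp | grep -w sshd

# Which port is configured? (the authoritative answer needs root:
#   sudo sshd -T | grep -i '^port'
# a quick read of the config file usually suffices)
grep -i '^[[:space:]]*Port' /etc/ssh/sshd_config 2>/dev/null \
  || echo "no Port line found (default 22)"
```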

Regards

On Fri, 2 Oct 2020, 9:04, Andrés DG  wrote:

> Nmap should give you back the ports that are open. Unless the ICMP
> protocol is blocked (in that case I believe the PS and PA options
> worked for getting info on the hosts connected to the network: nmap
> -PS ip_a_consultar  or  nmap -PA ip_a_consultar).
> In my case it happened with the office router, which out of the blue
> stopped letting me reach the server over ssh remotely (through a public
> IP). I had to reboot it to get remote access again (from the local
> network I never had problems). I ended up requesting a static public IP
> and since then, so far, I have had no problems. Even so, in my case, I
> think it must have been some fault in the office router.
> Sorry I can't help you more.
> --
> *From:* user 
> *Sent:* Thursday, 1 October 2020 21:07
> *To:* debian-user-spanish@lists.debian.org <
> debian-user-spanish@lists.debian.org>
> *Subject:* Re: [OT] Acceso ssh
>
>
> On Thu, 01 Oct 2020 20:07:27 +, User wrote:
>
> > Hi:
> >
> > I have a server that I cannot reach via its public IP!
> >
> > Previously there were two separate lines, and I could connect without
> > problems via the public and/or private IP; now there is only one
> > line, with two routers, and I cannot get in through either of the
> > routers using the public IP; I can get in via the private IP!
> >
> > Nmap tells me nothing, and the ssh port does not show up.
> >
> > Does anyone have an idea of what the situation might be, please?
> >
> > Thank you very much for your attention.
>
> As I see it, the message was not clear:
> - I have no access to the routers.
> - It is NOT a firewall matter.
> - I run Buster, up to date.
> - It has NOTHING to do with whether the IP is fixed or dynamic.
>
> Thanks for the replies; I hope it is clearer now.
>
>


Re: General-Purpose Server for Debian Stable

2020-10-02 Thread David Christensen

On 2020-10-01 14:37, Linux-Fan wrote:

Hello fellow list users,

I am constantly needing more computation power, RAM and HDD storage such
that I have finally decided to buy a server for my next "workstation". The
reasoning is that my experience with "real" servers is that they are most
reliable, very helpful in indicating errors (dedicated LEDs next to the
PCIe slots, for instance), and modern servers' noise seems to be acceptable
for my working environment (?)

I currently use a Fujitsu RX 1330 M1 (1U server, very silent) and it
clearly is not "enough" in terms of RAM and HDD capacity. A little more
graphics processing power than a low-profile GPU would be nice, too :)

Rack-Mountability is a must, although I am open to putting another tower in
there, sideways, should that be advantageous.

In terms of "performance" specifications, I am thinking of the following:

* 1x16-core CPU (e.g. AMD EPYC 7302)

* 64 GiB RAM (e.g. 2x32 GiB or 4x16 GiB)

   I plan to extend this to 128 GiB as soon as the need arises.
   As I am exceeding the 4T mark, I am increasingly considering the
   use of ZFS.

   Currently, I have the maximum of 32 GiB installed in the
   RX 1330 M1 and while it is often enough, there are times where I
   am using 40 GiB SSD swap to overcome the limits.

* 2x2T HDD for slow storage (local Debian Mirror, working data),
   2x4T SSD for fast storage (VMs, OS)
   I will do software-RAID1 (ZFS or mdadm is still undecided).
   If possible, I would like to use the power of the modern NVMe PCIe
   U.2 (U.3?) SSDs, because they really seem to be much faster and that
   may speed up the parallel use of VMs and be more future-proof.

* 1-2x 10G N-BaseT Ethernet for connecting to other machines to share
   virtual machine storage (I am doing this already and it works...)

* a 150W GPU if possible (75W full-sized card would be OK, too).

Typical workloads:
data compression (Debian live build, xz),
virtual machines (software installation, updates)

Rarely:
GPGPU (e.g. nVidia CUDA, but some experimentation with OpenCL, too)
single-core load coupled with very high RAM use (cbmc)

Some time ago, there was this thread
https://lists.debian.org/debian-user/2020/06/msg01117.html
It already gave me some ideas...

I am considering one the following models which are AMD EPYC based (I think
AMDs provide good performance for my types of use).

* HPE DL385 G10 Plus
* Dell PowerEdge R7515

I have an old HP DL380 G4 in the rack and while it is incredibly loud, it
is also very reliable. Of course, it is rarely online because of its
excessive loudness and power draw, but I take it that HPE is going to be
reliable? Before the Fujitsu, I used a HP Z400 workstation and before that
a HP Compaq d530 CMT, and all of these still "function", despite being too
slow for today's loads.

I am also taking into consideration these, although they are Intel-based
and I find it a lot harder to obtain information on prices, compatibility,
etc. for these manufacturers:

* Fujitsu PRIMERGY RX2540 M5
* Oracle X8-2L
   (seems to be too loud for my taste. especially compared to the others?)

I have already learned from my local vendor that HPE does not support the
use of non-HPE HDDs in the server which means I would need to buy all my
drives directly from HPE (of course this will be very expensive).
Additionally, none of the server manufacturers list Debian compatibility,
thus my questions are as follows:

* Does anybody run Debian stable (10) on any of these servers?
   Does it work well?

* Is there any experience with "unsupported" HDD configurations i.e.
   disks not bought from the server manufacturer?

   I would think that during the warranty period (3y) I best stay with
   the manufacturer-provided HDDs but after that, it would be nice to be
   able to add some more "cheap" storage...

* Of course, if there are any other comments, I am happy to hear them, too.

   I am looking into all options, although a fully self-built system is
   probably too much. I once tried to (only) get a decent PC case and
   failed at it... I can only imagine it being worse for rackmount PC
   cases and creating a complete system composed of individual parts?

Thanks in advance
Linux-Fan



I suggest identifying your workloads, how much CPU, memory, disk I/O, 
etc., each requires, and then dividing them across your several computers.



Understand that a 4 core 5 GHz CPU and a 16 core 2.5 GHz CPU have 
similar prices and power consumption, but the former will run sequential 
tasks twice as fast and the latter will run concurrent tasks twice as fast.



I would think that you should convert one of your existing machines into 
a file server.  Splitting 4 TB across 2 @ 2 TB HDD's and 2 @ 4 TB SSD's 
can work, but 4 @ 4 TB SSD's with a 10 Gbps Ethernet connection should 
be impressive.  If you choose ZFS, it will need memory.  The rule of 
thumb is 5 GB of memory per 1 TB of storage.  So, pick a machine that 
has at least 20 GB of memory.



As for the workstation, it 

RE: [OT] Acceso ssh

2020-10-02 Thread Andrés DG
Nmap should give you back the ports that are open. Unless the ICMP protocol
is blocked (in that case I believe the PS and PA options worked for getting
info on the hosts connected to the network: nmap -PS ip_a_consultar  or
nmap -PA ip_a_consultar).
In my case it happened with the office router, which out of the blue stopped
letting me reach the server over ssh remotely (through a public IP). I had
to reboot it to get remote access again (from the local network I never had
problems). I ended up requesting a static public IP and since then, so far,
I have had no problems. Even so, in my case, I think it must have been some
fault in the office router.
Sorry I can't help you more.

From: user 
Sent: Thursday, 1 October 2020 21:07
To: debian-user-spanish@lists.debian.org 

Subject: Re: [OT] Acceso ssh


On Thu, 01 Oct 2020 20:07:27 +, User wrote:

> Hi:
>
> I have a server that I cannot reach via its public IP!
>
> Previously there were two separate lines, and I could connect without
> problems via the public and/or private IP; now there is only one line,
> with two routers, and I cannot get in through either of the routers
> using the public IP; I can get in via the private IP!
>
> Nmap tells me nothing, and the ssh port does not show up.
>
> Does anyone have an idea of what the situation might be, please?
>
> Thank you very much for your attention.

As I see it, the message was not clear:
- I have no access to the routers.
- It is NOT a firewall matter.
- I run Buster, up to date.
- It has NOTHING to do with whether the IP is fixed or dynamic.

Thanks for the replies; I hope it is clearer now.



Re: General-Purpose Server for Debian Stable

2020-10-02 Thread deloptes
Linux-Fan wrote:

> I am constantly needing more computation power, RAM and HDD storage such
> that I have finally decided to buy a server for my next "workstation". The
> reasoning is that my experience with "real" servers is that they are most
> reliable, very helpful in indicating errors (dedicated LEDs next to the
> PCIe slots for instance) and modern servers' noise seems to be acceptable
> for my working envirnoment (?)
> 

they are loud - for me it is unacceptable to have one in a living space

> I currently use a Fujitsu RX 1330 M1 (1U server, very silent) and it
> clearly is not "enough" in terms of RAM and HDD capacity. A little more
> graphics processing power than a low-profile GPU would be nice, too :)
> 

Servers usually do not have good GPUs - they are mostly not needed there.

Why don't you split your setup into powerful workstation and your current
server?

> Rack-Mountability is a must, although I am open to putting another tower
> in there, sideways, should that be advantageous.
> 
> In terms of "performance" specifications, I am thinking of the following:
> 
> * 1x16-core CPU (e.g. AMD EPYC 7302)
> 
> * 64 GiB RAM (e.g. 2x32 GiB or 4x16 GiB)
> 
> I plan to extend this to 128 GiB as soon as the need arises.
> As I am exceeding the 4T mark, I am increasingly considering the
> use of ZFS.
> 
> Currently, I have the maximum of 32 GiB installed in the
> RX 1330 M1 and while it is often enough, there are times where I
> am using 40 GiB SSD swap to overcome the limits.
> 
> * 2x2T HDD for slow storage (local Debian Mirror, working data),
> 2x4T SSD for fast storage (VMs, OS)
> I will do software-RAID1 (ZFS or mdadm is still undecided).
> If possible, I would like to use the power of the modern NVMe PCIe
> U.2 (U.3?) SSDs, because they really seem to be much faster and that
> may speed-up the parallel use of VMs and be more future-proof.
> 
> * 1-2x 10G N-BaseT Ethernet for connecting to other machines to share
> virtual machine storage (I am doing this already and it works...)
> 
> * a 150W GPU if possible (75W full-sized card would be OK, too).
> 
> Typical workloads:
> data compression (Debian live build, xz),
> virtual machines (software installation, updates)
>

you could offload your virtual machines to dedicated hardware. Depending on
their number and configuration you may indeed need a server for that -
better look at the requirements of the VM software manufacturer (VMware or
Oracle or whatever else)
 
> Rarely:
> GPGPU (e.g. nVidia CUDA, but some experimentation with OpenCL, too)
> single-core load coupled with very high RAM use (cbmc)
> 
> Some time ago, there was this thread
> https://lists.debian.org/debian-user/2020/06/msg01117.html
> It already gave me some ideas...
> 
> I am considering one the following models which are AMD EPYC based (I
> think AMDs provide good performance for my types of use).
> 
> * HPE DL385 G10 Plus
> * Dell PowerEdge R7515
> 
> I have an old HP DL380 G4 in the rack and while it is incredibly loud, it
> is also very reliable. Of course, it is rarely online for its excessive
> loudness and power draw, but I derive that HPE is going be reliable?
> Before the Fujitsu, I used a HP Z400 workstation and before that a HP
> Compaq d530 CMT and all of these still "function", despite being too slow
> for today's loads.
> 

Servers are loud - same for DL380 G10, same for the PowerEdge

> I am also taking into consideration these, although they are Intel-based
> and I find it a lot harder to obtain information on prices, compatibility
> etc. for these manufacturers:
> 
> * Fujitsu PRIMERGY RX2540 M5
> * Oracle X8-2L
> (seems to be too loud for my taste. especially compared to the others?)
> 
> I have already learned from my local vendor that HPE does not support the
> use of non-HPE HDDs in the server which means I would need to buy all my
> drives directly from HPE (of course this will be very expensive).
> Additionally, none of the server manufacturers list Debian compatibility,
> thus my questions are as follows:
> 
> * Does anybody run Debian stable (10) on any of these servers?
> Does it work well?
> 
> * Is there any experience with "unsupported" HDD configurations i.e.
> disks not bought from the server manufacturer?
> 

"HPE does not support it" does not mean that you cannot do it - rather, you
will not have warranty/support in case of trouble. If you can live with it?

> I would think that during the warranty period (3y) I best stay with
> the manufacturer-provided HDDs but after that, it would be nice to be
> able to add some more "cheap" storage...
> 

They use Seagate or Samsung disks - I am not sure ATM. Disks are disks; you
can put whatever you want in there. The disks they sell seem to be
manufactured specifically for HPE and are likely tested/certified by HPE,
although here one disk failed after 1.5 years of operation in a conditioned
server room where the machine did not have much disk load. I mean, it is
good to stay with their hardware for the 3 years.

> * Of course, if there