Re: [PVE-User] osd init authentication failed: (1) Operation not permitted

2020-06-29 Thread Alwin Antreich
On Mon, Jun 29, 2020 at 11:23:31AM +, Naumann, Thomas wrote: > Hi Alwin, > > yes, all OSDs, which did not start, were on same physical clusternode > and all running VMs on cluster were dead because of missing objects. > > Problem was that those OSDs did not have an entry in "ceph auth list",
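
For readers hitting the same symptom, a minimal sketch of how a missing OSD key is usually re-registered on Ceph Luminous (the OSD id and the caps are placeholders; compare against a working OSD first):

  # list registered keys; every OSD should show up as osd.<id>
  ceph auth list                  # 'ceph auth ls' on newer releases
  # copy the caps of a healthy OSD as reference
  ceph auth get osd.1
  # re-register the missing key from the OSD's local keyring (id 5 is a placeholder)
  ceph auth add osd.5 mon 'allow profile osd' mgr 'allow profile osd' osd 'allow *' \
      -i /var/lib/ceph/osd/ceph-5/keyring
  systemctl restart ceph-osd@5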

Re: [PVE-User] osd init authentication failed: (1) Operation not permitted

2020-06-29 Thread Alwin Antreich
Hello Thomas, On Fri, Jun 26, 2020 at 07:51:57AM +, Naumann, Thomas wrote: > Hi, > > in our production cluster (proxmox 5.4, ceph 12.2) there is an issue > since yesterday. after an increase of a pool 5 OSDs do not start, > status is "down/in", ceph health: HEALTH_WARN nodown,noout flag(s)

Re: [PVE-User] PVE 6, wireless and regulatory database...

2020-05-27 Thread Alwin Antreich
On Tue, May 26, 2020 at 05:31:46PM +0200, Marco Gaiarin wrote: > Mandi! Alwin Antreich > In chel di` si favelave... > > > > root@ino:~# dpkg -l | grep wireless-regdb > > > ii wireless-regdb 2016.06.10-1 > > >

Re: [PVE-User] PVE 6, wireless and regulatory database...

2020-05-26 Thread Alwin Antreich
On Tue, May 26, 2020 at 12:44:30PM +0200, Marco Gaiarin wrote: > Mandi! Alwin Antreich > In chel di` si favelave... > > > It is not an issue with the package. I forgot about the alternatives in > > Debian (thanks Thomas). Once you set the alternative (tool: &g

Re: [PVE-User] PVE 6, wireless and regulatory database...

2020-05-22 Thread Alwin Antreich
On Wed, May 20, 2020 at 10:58:16PM +0200, Marco Gaiarin wrote: > Mandi! Alwin Antreich > In chel di` si favelave... > > > Debian uses a different file name for the signature file then ubuntu. > > A-HA! > > > > You can always download the latest

Re: [PVE-User] PVE 6, wireless and regulatory database...

2020-05-19 Thread Alwin Antreich
On Tue, May 19, 2020 at 09:40:11AM +0200, Marco Gaiarin wrote: > Mandi! Martin Maurer > In chel di` si favelave... > > > use the buster-backports - > > https://packages.debian.org/buster-backports/wireless-regdb > > (see also https://backports.debian.org/) > > Seems is not sufficient: > >

Re: [PVE-User] Proxmox with ceph storage VM performance strangeness

2020-04-22 Thread Alwin Antreich
On Wed, Apr 22, 2020 at 12:43:58PM +0200, Rainer Krienke wrote: > hello, > > there is no single workload, but a bunch of VMs the do a lot of > different things many of which do not special performance > demands. The VMs that do need speed are NFS Fileservers and SMBservers. > > And exactly these

Re: [PVE-User] Proxmox with ceph storage VM performance strangeness

2020-04-21 Thread Alwin Antreich
On Tue, Apr 21, 2020 at 03:34:47PM +0200, Rainer Krienke wrote: > Hello, > > just wanted to thank you for your help and to tell you that I found the > culprit that made my read-performance look rather small on a proxmox VM > with a LV based on 4 disks (rbds). The best result using bonnie++ as a >

Re: [PVE-User] Proxmox with ceph storage VM performance strangeness

2020-04-15 Thread Alwin Antreich
On Tue, Apr 14, 2020 at 08:15:15PM +0200, Rainer Krienke wrote: > Am 14.04.20 um 18:09 schrieb Alwin Antreich: > > >> > >> In a VM I also tried to read its own striped LV device: dd > >> if=/dev/vg/testlv of=/dev/null bs=1024k status=progress (after clearing &g
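
As an aside, such a sequential-read check only says something about the RBD layer if the Linux page cache is dropped first; a minimal sketch (the LV path is the one quoted above):

  sync; echo 3 > /proc/sys/vm/drop_caches      # flush and drop page cache, dentries, inodes
  dd if=/dev/vg/testlv of=/dev/null bs=1024k status=progress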

Re: [PVE-User] Create secondary pool on ceph servers..

2020-04-14 Thread Alwin Antreich
On Tue, Apr 14, 2020 at 02:35:55PM -0300, Gilberto Nunes wrote: > Hi there > > I have 7 servers with PVE 6 all updated... > All servers has named pve1,pve2 and so on... > On pve3, pve4 and pve5 has SSD HD of 960GB. > So we decided to create a second pool that will use only this SSD. > I have

Re: [PVE-User] Proxmox with ceph storage VM performance strangeness

2020-04-14 Thread Alwin Antreich
On Tue, Apr 14, 2020 at 05:21:44PM +0200, Rainer Krienke wrote: > Am 14.04.20 um 16:42 schrieb Alwin Antreich: > >> According to these numbers the relation from write and read performance > >> should be the other way round: writes should be slower than reads, but >

Re: [PVE-User] Proxmox with ceph storage VM performance strangeness

2020-04-14 Thread Alwin Antreich
On Tue, Apr 14, 2020 at 03:54:30PM +0200, Rainer Krienke wrote: > Hello, > > in between I learned a lot from this group (thanks a lot) to solve many > performance problems I initially faced with proxmox in VMs having their > storage on CEPH rbds. > > I parallelized access to many disks on a vm

Re: [PVE-User] Need some advice on pve writeback caching

2020-04-03 Thread Alwin Antreich
On Fri, Apr 03, 2020 at 12:54:36PM +0200, Rainer Krienke wrote: > Hello Alwin, > > thanks for you very much answer. > > Regarding LVM: > I initially thought the cache size is 25MB for each RBD device. So a LVM > based on in my case 4 RBD devices could loose 100MB. This would have > been a

Re: [PVE-User] Need some advice on pve writeback caching

2020-04-03 Thread Alwin Antreich
Hello Rainer, On Fri, Apr 03, 2020 at 10:00:58AM +0200, Rainer Krienke wrote: > Hello, > > I played around with rbd caching by activating "Writeback" mode in > proxmox6. This really helps for write performance so I would like to use > it but the documentation says that a possible danger is a

Re: [PVE-User] Spillover issue

2020-03-25 Thread Alwin Antreich
On Wed, Mar 25, 2020 at 12:27:56PM +0100, Eneko Lacunza wrote: > Hi Alwin, > > El 25/3/20 a las 11:55, Alwin Antreich escribió: > > > > > > > > The easiest way ist to destroy and re-create the OSD with a bigger > > > > > > DB/WAL. The

Re: [PVE-User] Spillover issue

2020-03-25 Thread Alwin Antreich
On Wed, Mar 25, 2020 at 08:43:41AM +0100, Eneko Lacunza wrote: > Hi Alwin, > > El 24/3/20 a las 14:54, Alwin Antreich escribió: > > On Tue, Mar 24, 2020 at 01:12:03PM +0100, Eneko Lacunza wrote: > > > Hi Allwin, > > > > > > El 24/3/20 a las 12:24, Alwin

Re: [PVE-User] Spillover issue

2020-03-24 Thread Alwin Antreich
On Tue, Mar 24, 2020 at 01:12:03PM +0100, Eneko Lacunza wrote: > Hi Allwin, > > El 24/3/20 a las 12:24, Alwin Antreich escribió: > > On Tue, Mar 24, 2020 at 10:34:15AM +0100, Eneko Lacunza wrote: > > > We're seeing a spillover issue with Ceph, using 14.2.8: > [...]

Re: [PVE-User] Spillover issue

2020-03-24 Thread Alwin Antreich
Hello Eneko, On Tue, Mar 24, 2020 at 10:34:15AM +0100, Eneko Lacunza wrote: > Hi all, > > We're seeing a spillover issue with Ceph, using 14.2.8: > > We originally had 1GB rocks.db partition: > > 1. ceph health detail >HEALTH_WARN BlueFS spillover detected on 3 OSD >BLUEFS_SPILLOVER
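
A hedged sketch of how the spillover is usually inspected before deciding to re-create the OSD with a larger DB (the OSD id is a placeholder; run the daemon command on the node hosting that OSD):

  ceph health detail | grep -A3 BLUEFS_SPILLOVER
  ceph daemon osd.3 perf dump bluefs | egrep 'db_total_bytes|db_used_bytes|slow_used_bytes'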

Re: [PVE-User] Proxmox with ceph storage VM performance strangeness

2020-03-17 Thread Alwin Antreich
On Tue, Mar 17, 2020 at 05:07:47PM +0100, Rainer Krienke wrote: > Hello Alwin, > > thank you for your reply. > > The test VMs config is this one. It only has the system disk as well a > disk I added for my test writing on the device with dd: > > agent: 1 > bootdisk: scsi0 > cores: 2 > cpu:

Re: [PVE-User] Proxmox with ceph storage VM performance strangeness

2020-03-17 Thread Alwin Antreich
Hallo Rainer, On Tue, Mar 17, 2020 at 02:04:22PM +0100, Rainer Krienke wrote: > Hello, > > I run a pve 6.1-7 cluster with 5 nodes that is attached (via 10Gb > Network) to a ceph nautilus cluster with 9 ceph nodes and 144 magnetic > disks. The pool with rbd images for disk storage is erasure

Re: [PVE-User] Better understanding CEPH Pool definition

2020-03-11 Thread Alwin Antreich
Hello Gregor, On Wed, Mar 11, 2020 at 10:57:28AM +0100, Gregor Burck wrote: > Hi, > > I've still problems to understand the pooling definition Size/min in ceph and > what it means to us. > > We've a 3 node cluster with 4 SSDs (the smallest sinfull setup in the > documention). :) > > When I

Re: [PVE-User] lzo files conundrum

2020-03-11 Thread Alwin Antreich
Hello Renato, On Wed, Mar 11, 2020 at 07:35:21AM +0100, Renato Gallo via pve-user wrote: > Date: Wed, 11 Mar 2020 07:35:21 +0100 (CET) > From: Renato Gallo > To: pve-user@pve.proxmox.com > Cc: g noto > Subject: lzo files conundrum > X-Mailer: Zimbra 8.8.15_GA_3829 (ZimbraWebClient - FF68 >

Re: [PVE-User] SSD als osd neu initialisieren/wieder aufnehmen

2020-03-09 Thread Alwin Antreich
On Mon, Mar 09, 2020 at 02:35:05PM +0100, Gregor Burck wrote: > Hi, > > > How exactly? Down -> Out -> Destroy, via the GUI? > Yep, via the GUI. Best leave 'cleanup disks' checked; that removes the partition table and the first 200 MB. Or via the CLI with '--cleanup'. > >
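
The CLI equivalent, as a sketch (OSD id and device are placeholders; PVE 6 syntax shown, on PVE 5 the commands were 'pveceph destroyosd <id> -cleanup 1' and 'pveceph createosd'):

  systemctl stop ceph-osd@7
  ceph osd out 7
  pveceph osd destroy 7 --cleanup     # wipes the partition table and the first 200 MB
  pveceph osd create /dev/sdX         # re-create the OSD on the now-empty disk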

Re: [PVE-User] SSD als osd neu initialisieren/wieder aufnehmen

2020-03-09 Thread Alwin Antreich
Hello Gregor, On Mon, Mar 09, 2020 at 01:07:20PM +0100, Gregor Burck wrote: > Hi, > > I am testing various things with CEPH. > In doing so I removed an SSD via Destroy. How exactly? Down -> Out -> Destroy, via the GUI? > > How can I take the SSD back into the cluster? When

Re: [PVE-User] How to restart ceph-mon?

2020-02-21 Thread Alwin Antreich
On Fri, Feb 21, 2020 at 03:29:08PM +0100, Marco Gaiarin wrote: > Mandi! Alwin Antreich > In chel di` si favelave... > > > Yes, that looks strange. But as said before, it is deprecated to use > > IDs. Best destroy and re-create the MON one-by-one. The default command
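
For reference, a sketch of the destroy/re-create cycle for a single monitor (PVE 5 command names; on PVE 6 it is 'pveceph mon destroy' / 'pveceph mon create'; instance '3' is the numeric ID from this thread):

  systemctl status ceph-mon@3.service
  pveceph destroymon 3          # remove the old, ID-named monitor
  pveceph createmon             # re-create it; it is then named after the host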

Re: [PVE-User] How to restart ceph-mon?

2020-02-20 Thread Alwin Antreich
On Thu, Feb 20, 2020 at 03:14:01PM +0100, Marco Gaiarin wrote: > Mandi! Alwin Antreich > In chel di` si favelave... > > > > it is time to kill it? > > I suppose you did that already. Did it work? > > No, i've done just now. But yes, a 'kill' worked. Monitor rest

Re: [PVE-User] How to restart ceph-mon?

2020-02-20 Thread Alwin Antreich
On Wed, Feb 19, 2020 at 12:05:44PM +0100, Marco Gaiarin wrote: > Mandi! Alwin Antreich > In chel di` si favelave... > > > What does the status of the service show? > > systemctl status ceph-mon@3.service > > Uh, never minded about that, damn me! > > root@th

Re: [PVE-User] How to restart ceph-mon?

2020-02-19 Thread Alwin Antreich
Hello Marco, On Wed, Feb 19, 2020 at 11:39:06AM +0100, Marco Gaiarin wrote: > > I've upgraded ceph, PVE5, minor upgrade from 12.2.12 to 12.2.13. > > OSD nodes get rebooted, but i have also two nodes that are only > monitors, and host some VM/LXC so i've tried to simply restart > ceph-mon. But

Re: [PVE-User] Misleading documentation for qm importdisk

2020-02-05 Thread Alwin Antreich
Hello Simone, On Wed, Feb 05, 2020 at 11:20:56AM +0100, Simone Piccardi via pve-user wrote: > Date: Wed, 5 Feb 2020 11:20:56 +0100 > From: Simone Piccardi > To: PVE User List > Subject: Misleading documentation for qm importdisk > User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0)
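
A minimal usage sketch of the command the thread is about (VM ID, source path and storage name are placeholders; the exact volume name of the imported disk comes from the command output):

  qm importdisk 100 /path/to/disk.qcow2 local-lvm
  # the image is added as an 'unused' disk and still has to be attached, e.g.:
  qm set 100 --scsi1 local-lvm:vm-100-disk-1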

Re: [PVE-User] RBD Storage from 6.1 to 3.4 (or 4.4)

2020-01-30 Thread Alwin Antreich
Hello Fabrizio, On Thu, Jan 30, 2020 at 12:46:16PM +0100, Fabrizio Cuseo wrote: > > I have installed a new cluster with the last release, with a local ceph > storage. > I also have 2 old and smaller clusters, and I need to migrate all the VMs to > the new cluster. > The best method i have used

Re: [PVE-User] External Ceph cluster for PVE6.1-5

2020-01-29 Thread Alwin Antreich
On Wed, Jan 29, 2020 at 01:45:51PM +0100, Mark Schouten wrote: > On Wed, Jan 29, 2020 at 09:23:53AM +0100, Alwin Antreich wrote: > > > We just upgraded one of our clusters to PVE 6.1-5. It's not > > > hyperconverged, so Ceph is running on an external cluster. That cluster

Re: [PVE-User] External Ceph cluster for PVE6.1-5

2020-01-29 Thread Alwin Antreich
Hi Mark, On Wed, Jan 29, 2020 at 03:32:10AM +0100, Mark Schouten wrote: > > Hi, > > We just upgraded one of our clusters to PVE 6.1-5. It's not hyperconverged, > so Ceph is running on an external cluster. That cluster runs Luminous, and we > installed the Nautilus client on the

Re: [PVE-User] PVE 5.4: cannot move disk image to Ceph

2019-09-06 Thread Alwin Antreich
On Fri, Sep 06, 2019 at 11:44:10AM +0200, Uwe Sauter wrote: > root@px-bravo-cluster:~# rbd -p vdisks create vm-112-disk-0 --size 1G > rbd: create error: (17) File exists > 2019-09-06 11:35:20.943998 7faf704660c0 -1 librbd: rbd image vm-112-disk-0 > already exists > > root@px-bravo-cluster:~# rbd
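
A sketch of how such a leftover target image is usually checked (pool and image names from the quote); only delete once it is confirmed to be an orphan of a failed move:

  rbd -p vdisks ls | grep vm-112
  rbd -p vdisks info vm-112-disk-0
  rbd -p vdisks status vm-112-disk-0     # shows watchers, i.e. whether something still uses it
  rbd -p vdisks rm vm-112-disk-0         # only if it really is an orphan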

Re: [PVE-User] PVE 5.4: cannot move disk image to Ceph

2019-09-06 Thread Alwin Antreich
Hello Uwe, On Fri, Sep 06, 2019 at 10:41:18AM +0200, Uwe Sauter wrote: > Hi, > > I'm having trouble moving a disk image to Ceph. Moving between local disks > and NFS share is working. > > The error given is: > > > create full clone of drive scsi0

Re: [PVE-User] Reinstall Proxmox with Ceph storage

2019-08-06 Thread Alwin Antreich via pve-user
; >On Tue, 6 Aug 2019 at 06:48, Alwin Antreich > wrote: >> >> Hello Gilberto, >> >> On Mon, Aug 05, 2019 at 04:21:03PM -0300, Gilberto Nunes wrote: >> > Hi there... >> > >> > Today we have 3 servers work on Cluster HA and Ceph. >>

Re: [PVE-User] Reinstall Proxmox with Ceph storage

2019-08-06 Thread Alwin Antreich
Hello Gilberto, On Mon, Aug 05, 2019 at 04:21:03PM -0300, Gilberto Nunes wrote: > Hi there... > > Today we have 3 servers work on Cluster HA and Ceph. > Proxmox all nodes is 5.4 > We have a mix of 3 SAS and 3 SATA, but just 2 SAS are using in CEPH storage. > So, we like to reinstall each node in

Re: [PVE-User] adding ceph osd nodes

2019-07-18 Thread Alwin Antreich
On Thu, Jul 18, 2019 at 01:43:32PM +0200, mj wrote: > Hi, > > On 7/17/19 2:47 PM, Alwin Antreich wrote: > > > I like to add, though not explicitly asked. While it is technically > > possible, the cluster will lose its enterprise support. As Ceph is under > > s

Re: [PVE-User] adding ceph osd nodes

2019-07-17 Thread Alwin Antreich
On Wed, Jul 17, 2019 at 12:47:32PM +0200, mj wrote: > Hi, > > We are running a three-node licensed hyper-converged proxmox cluster with > ceph storage. > > Question: is it possible to add some extra ceph OSD storage nodes, without > proxmox virtualisation, and thus without the need to purchase

Re: [PVE-User] Ceph bluestore OSD Journal/DB disk size

2019-05-29 Thread Alwin Antreich
Hi Eneko, On Wed, May 29, 2019 at 10:30:33AM +0200, Eneko Lacunza wrote: > Hi all, > > I have noticed that our office Proxmox cluster has a Bluestore OSD with a > very small db partition. This OSD was created from GUI on 12th march this > year: > > This node has 4 OSDs: > - osd.12: bluestore,
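
A sketch of how the DB partition and its usage can be checked per OSD (osd.12 is the one named in the quote; run on the node hosting it):

  ceph daemon osd.12 perf dump bluefs | egrep 'db_total_bytes|db_used_bytes'
  ceph-disk list      # shows the block.db partition of ceph-disk based OSDs (PVE 5.x GUI default)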

Re: [PVE-User] ceph rebalance/ raw vs pool usage

2019-05-08 Thread Alwin Antreich
On Wed, May 08, 2019 at 09:34:44AM +0100, Mark Adams wrote: > Thanks for getting back to me Alwin. See my response below. > > > I have the same size and count in each node, but I have had a disk failure > (has been replaced) and also had issues with osds dropping when that memory > allocation

Re: [PVE-User] ceph rebalance/ raw vs pool usage

2019-05-08 Thread Alwin Antreich
Hello Mark, On Tue, May 07, 2019 at 11:26:17PM +0100, Mark Adams wrote: > Hi All, > > I would appreciate a little pointer or clarification on this. > > My "ceph" vm pool is showing 84.80% used. But the %RAW usage is only 71.88% > used. is this normal? there is nothing else on this ceph cluster

Re: [PVE-User] Proxmox 5.2, CEPH 12.2.12: still CephFS looks like jewel

2019-05-06 Thread Alwin Antreich
Hi Igor, On Sun, May 05, 2019 at 12:39:06AM +0700, Igor Podlesny wrote: > root@pve-40:~# ceph osd set-require-min-compat-client luminous > set require_min_compat_client to luminous > > After enabling CephFS on a single node: > > root@pve-40:~# ceph osd set-require-min-compat-client luminous >
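
As a hedged aside, the connected client versions can be checked before (and after) raising the requirement:

  ceph features                                      # feature/release level of connected clients
  ceph osd set-require-min-compat-client luminous    # refuses if older clients are still connected
                                                     # (can be forced with --yes-i-really-mean-it)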

Re: [PVE-User] rbd lock list -- how to track down to client?

2019-04-30 Thread Alwin Antreich
On Tue, Apr 30, 2019 at 03:12:58PM +0700, Igor Podlesny wrote: > On Tue, 30 Apr 2019 at 15:02, Alwin Antreich wrote: > [...] > > > > $ rbd lock list cassandra > > > > > > > > There is 1 exclusive lock on this image. > > > > Locker I

Re: [PVE-User] rbd lock list -- how to track down to client?

2019-04-30 Thread Alwin Antreich
Hello Igor, On Tue, Apr 30, 2019 at 02:30:19PM +0700, Igor Podlesny wrote: > In most cases I've found people were willing just to remove the lock. > But as to me it's better try to find if there's no legitimate use > before doing that. > > So, as an example (at >
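
A sketch of how a lock is usually traced back to its owner (pool/image, lock id and client id are placeholders taken from the lock listing):

  rbd lock list <pool>/<image>        # 'Locker' (client.<id>) and 'Address' (IP of the owning node)
  rbd status <pool>/<image>           # cross-check: watchers on the image
  # only once the owner is known to be gone:
  rbd lock remove <pool>/<image> "<lock-id>" client.<id>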

Re: [PVE-User] Boot disk corruption after Ceph OSD destroy with cleanup

2019-03-22 Thread Alwin Antreich
On Fri, Mar 22, 2019 at 10:40:17AM +0100, Eneko Lacunza wrote: > Hi, > > El 22/3/19 a las 9:59, Alwin Antreich escribió: > > On Fri, Mar 22, 2019 at 09:03:22AM +0100, Eneko Lacunza wrote: > > > El 22/3/19 a las 8:35, Alwin Antreich escribió: > > > > On Thu, Mar

Re: [PVE-User] Boot disk corruption after Ceph OSD destroy with cleanup

2019-03-22 Thread Alwin Antreich
On Fri, Mar 22, 2019 at 09:03:22AM +0100, Eneko Lacunza wrote: > Hi Alwin, > > El 22/3/19 a las 8:35, Alwin Antreich escribió: > > On Thu, Mar 21, 2019 at 03:58:53PM +0100, Eneko Lacunza wrote: > > > We have removed an OSD disk from a server in our office cluster, r

Re: [PVE-User] Boot disk corruption after Ceph OSD destroy with cleanup

2019-03-22 Thread Alwin Antreich
On Thu, Mar 21, 2019 at 03:58:53PM +0100, Eneko Lacunza wrote: > Hi all, > > We have removed an OSD disk from a server in our office cluster, removing > partitions (with --cleanup 1) and that has made the server unable to boot > (we have seen this in 2 servers in a row...) > > Looking at the

Re: [PVE-User] Overwhelming Migration to EC2

2019-02-18 Thread Alwin Antreich
Hello John, On Mon, Feb 18, 2019 at 12:23:42PM -0800, John C. Reid wrote: > I have been using ProxMox to host our VMs for a couple of years now and I > really like it. Unfortunately the last couple weeks have been an eye opener. > We had a fire earlier this month and last week a storm caused

Re: [PVE-User] lots of 'heartbeat_check: no reply from ...' in the logs

2019-02-08 Thread Alwin Antreich
On Fri, Feb 08, 2019 at 09:07:09AM +0100, mj wrote: > Hi Alwin, > > Thanks for your reply! Appreciated. > > > These messages are not necessarily caused by a network issue. It might > > well be that the daemon osd.18 can not react to heartbeat messages. > > The thing is: the two OSDs are on the

Re: [PVE-User] lots of 'heartbeat_check: no reply from ...' in the logs

2019-02-07 Thread Alwin Antreich
Hello Mj, On Thu, Feb 07, 2019 at 08:15:52PM +0100, mj wrote: > Hi, > > We are getting continuous lines like in our logs, between osd.19 and osd.18, > both are on the same host pm2: > > > 2019-02-07T19:59:24.724447+01:00 pm2 ceph-osd 3093 - - 2019-02-07 > > 19:59:24.723800 7f902e9f0700 -1

Re: [PVE-User] Proxmox Ceph high memory usage

2019-01-16 Thread Alwin Antreich
Hello Gilberto, On Wed, Jan 16, 2019 at 10:11:06AM -0200, Gilberto Nunes wrote: > Hi there > > Anybody else experiencing high memory usage in Proxmox CEPH Storage Server? > I have a 6 node PVE CEPH and after upgrade, I have noticed this high memory > usage... > All servers have 16GB of RAM. I know

Re: [PVE-User] (Very) basic question regarding PVE Ceph integration

2018-12-17 Thread Alwin Antreich
Hello Eneko, On Mon, Dec 17, 2018 at 09:23:36AM +0100, Eneko Lacunza wrote: > Hi, > > El 16/12/18 a las 17:16, Frank Thommen escribió: > > > > I understand that with the new PVE release PVE hosts > > > > (hypervisors) can be > > > > used as Ceph servers.  But it's not clear to me if (or when)

Re: [PVE-User] (Very) basic question regarding PVE Ceph integration

2018-12-16 Thread Alwin Antreich
On Sun, Dec 16, 2018 at 05:16:50PM +0100, Frank Thommen wrote: > Hi Alwin, > > On 16/12/18 15:39, Alwin Antreich wrote: > > Hello Frank, > > > > On Sun, Dec 16, 2018 at 02:28:19PM +0100, Frank Thommen wrote: > > > Hi, > > > > > > I understan

Re: [PVE-User] (Very) basic question regarding PVE Ceph integration

2018-12-16 Thread Alwin Antreich
Hello Frank, On Sun, Dec 16, 2018 at 02:28:19PM +0100, Frank Thommen wrote: > Hi, > > I understand that with the new PVE release PVE hosts (hypervisors) can be > used as Ceph servers. But it's not clear to me if (or when) that makes > sense. Do I really want to have Ceph MDS/OSD on the same

Re: [PVE-User] Proxmox Ceph workload issue....

2018-12-10 Thread Alwin Antreich
Hello Gilberto, On Fri, Dec 07, 2018 at 03:34:42PM -0200, Gilberto Nunes wrote: > Hi there > > I have a 6 node Ceph cluster made with Proxmox. > In order to reduce the rebalance workload, I activated some options, like > this: > > ceph tell osd.* injectargs '--osd-max-backfills 1' > ceph tell
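
For completeness, the runtime throttling quoted above together with the related recovery options, as one hedged example (values are conservative examples, and injected values do not survive an OSD restart):

  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'
  # to make the settings persistent, put the same options into the [osd] section of ceph.conf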

Re: [PVE-User] Proxmox VE 5.3 released!

2018-12-04 Thread Alwin Antreich
Hi Lindsay, On Tue, Dec 04, 2018 at 11:59:41PM +1000, Lindsay Mathieson wrote: > One server has upgraded clean so far, but the 2nd one wants to remove pve :( > > apt-get dist-upgrade > The following packages were automatically installed and are no longer > required: >   apparmor ceph-fuse criu

Re: [PVE-User] Request for backport of Ceph bugfix from 12.2.9

2018-11-08 Thread Alwin Antreich
Hello Uwe, On Wed, Nov 07, 2018 at 09:01:09PM +0100, Uwe Sauter wrote: > Hi, > > I'm trying to manually migrate VM images with snapshots from pool "vms" to > pool "vdisks" but it fails: > > # rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import > --export-format 2 -
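
The full pipe from the thread, for reference (pool and image names as quoted; --export-format 2 is what carries the snapshots and needs the fix that went into 12.2.9):

  rbd export --export-format 2 vms/vm-102-disk-2 - \
      | rbd import --export-format 2 - vdisks/vm-102-disk-2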

Re: [PVE-User] NIC invertion after reboot

2018-11-06 Thread Alwin Antreich
Hi Gilberto, On Tue, Nov 06, 2018 at 12:03:00PM -0200, Gilberto Nunes wrote: > Hi there... > I am using this in /etc/default/grub: > net.ifnames=0 and biosdevname=0 > in order to use eth0, instead eno1, and so on... > Today the server was rebooted and after that occur a invertion of the > NIC...
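
A sketch of the usual way to apply this (file content abbreviated; note that with eth0-style names the order still depends on driver probe order, so pinning names by MAC via systemd .link files is the more robust alternative):

  # /etc/default/grub
  GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"

  update-grub      # then reboot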

Re: [PVE-User] NVMe

2018-10-29 Thread Alwin Antreich
Hi Grzegorz, On Mon, Oct 29, 2018 at 02:08:27PM +0100, lord_Niedzwiedz wrote: > Hi, > I have a problem. > I'm trying to install Proxmox on 4 NVMe drives. > One on the motherboard, two on the PCIe. > > Proxmox sees everything at installation. > I chose the zfs option (RAIDZ-1). > > And

Re: [PVE-User] I lost the cluster communication in a 10 nodes cluster

2018-10-19 Thread Alwin Antreich
Hi, On Thu, Oct 18, 2018, 17:24 Denis Morejon wrote: > I lost the cluster communication again. > > I have been using Proxmox since version 1, and this is the first time It > bothers me so much! > > - All the 10 nodes have the same version > > (pve-manager/5.2-9/4b30e8f9 (running kernel:

Re: [PVE-User] Proxmox CEPH 6 servers failures!

2018-10-04 Thread Alwin Antreich
Hello Gilberto, On Thu, Oct 4, 2018, 22:05 Gilberto Nunes wrote: > Hi there > > I have something like this: > > [ASCII diagram of the CEPH01-CEPH06 nodes and their links, garbled in this preview]

Re: [PVE-User] Cant connect to ceph anymore

2018-09-13 Thread Alwin Antreich
Hi, On Thu, Sep 13, 2018, 23:59 wrote: > Hey Marcus, > > Thanks for your Message. > > Am 13. September 2018 22:58:04 MESZ schrieb Marcus Haarmann < > marcus.haarm...@midoco.de>: > >Hi, > > > >so you would drive a 12.2 (luminous) service with a 10.x (jewel) > >client. > > Sorry, with > > "The

Re: [PVE-User] Proxmox and DRBD

2018-08-19 Thread Alwin Antreich
On Sat, Aug 18, 2018, 15:09 Klaus Darilion wrote: > > >> Is it possible to activate and use the leftover DRBD code in Proxmox? > >> > >> If not, I think the simple "manual" solution would be a DRBD-backed LVM > >> storage in active-active mode. Any experiences with such a setup (except > >> that

Re: [PVE-User] Proxmox and DRBD

2018-08-17 Thread Alwin Antreich
Hello Klaus, On Fri, Aug 17, 2018 at 10:22:30PM +0200, Klaus Darilion wrote: > Hi! > > Reading the archives I learnt that Proxmox removed DRBD as the consequence > of license issues (which were reverted). As far as is know this was about > DRBD9 and older Proxmox releases had support for DRBD8.

Re: [PVE-User] Cephfs starting 2nd MDS

2018-08-08 Thread Alwin Antreich
Hi, On Wed, Aug 08, 2018 at 07:54:45AM +0200, Vadim Bulst wrote: > Hi Alwin, > > thanks for your advise. But no success. Still same error. > > mds-section: > > [mds.1] >     host = scvirt03 >     keyring = /var/lib/ceph/mds/ceph-scvirt03/keyring [mds] keyring =

Re: [PVE-User] Cephfs starting 2nd MDS

2018-08-07 Thread Alwin Antreich
Hello Vadim, On Tue, Aug 7, 2018, 12:13 Vadim Bulst wrote: > Dear list, > > I'm trying to bring up a second mds with no luck. > > This is what my ceph.conf looks like: > > [global] > >auth client required = cephx >auth cluster required = cephx >auth service

Re: [PVE-User] Cluster doesn't recover automatically after blackout

2018-08-01 Thread Alwin Antreich
On Wed, Aug 01, 2018 at 01:40:34PM +0200, Eneko Lacunza wrote: > Hi Alwin, > > El 01/08/18 a las 12:56, Alwin Antreich escribió: > > On Wed, Aug 01, 2018 at 11:02:18AM +0200, Eneko Lacunza wrote: > > > Hi all, > > > > > > This morning there was a quite

Re: [PVE-User] Cluster doesn't recover automatically after blackout

2018-08-01 Thread Alwin Antreich
Hi, On Wed, Aug 01, 2018 at 11:02:18AM +0200, Eneko Lacunza wrote: > Hi all, > > This morning there was a quite long blackout which powered off a cluster of > 3 proxmox 5.1 servers. > > All 3 servers the same make and model, so they need the same amount of time > to boot. > > When the power

Re: [PVE-User] Poor CEPH performance? or normal?

2018-07-25 Thread Alwin Antreich
Hi, On Wed, Jul 25, 2018, 02:20 Mark Adams wrote: > Hi All, > > I have a proxmox 5.1 + ceph cluster of 3 nodes, each with 12 x WD 10TB GOLD > drives. Network is 10Gbps on X550-T2, separate network for the ceph > cluster. > Do a rados bench for testing the cluster performance, spinners are not
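
For comparison, the usual rados bench sequence for such a cluster (pool name, runtime and thread count are placeholders):

  rados bench -p <testpool> 60 write -b 4M -t 16 --no-cleanup
  rados bench -p <testpool> 60 seq -t 16
  rados bench -p <testpool> 60 rand -t 16
  rados -p <testpool> cleanup        # remove the benchmark objects afterwards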

Re: [PVE-User] pveceph createosd after destroyed osd

2018-07-05 Thread Alwin Antreich
On Thu, Jul 05, 2018 at 11:05:52AM +0100, Mark Adams wrote: > On 5 July 2018 at 11:04, Alwin Antreich wrote: > > > On Thu, Jul 05, 2018 at 10:26:34AM +0100, Mark Adams wrote: > > > Hi Anwin; > > > > > > Thanks for that - It's all working now! Just to conf

Re: [PVE-User] pveceph createosd after destroyed osd

2018-07-05 Thread Alwin Antreich
On Thu, Jul 05, 2018 at 10:26:34AM +0100, Mark Adams wrote: > Hi Anwin; > > Thanks for that - It's all working now! Just to confirm though, shouldn't > the destroy button handle some of these actions? or is it left out on > purpose? > > Regards, > Mark > I am not sure, what you mean exactly but

Re: [PVE-User] VM remains in snap-delete state

2018-07-04 Thread Alwin Antreich
On Tue, Jul 3, 2018, 12:23 Mark Schouten wrote: > Hi, > > On Wed, 2018-06-27 at 15:09 +0200, Mark Schouten wrote: > > I have a VM that remains in snap-delete state. I'm wondering what the > > safest way is to proceed. I think I can do a qm unlock, and click > > remove again, but I'm not sure.

Re: [PVE-User] pveceph createosd after destroyed osd

2018-07-03 Thread Alwin Antreich
On Tue, Jul 03, 2018 at 12:18:53PM +0100, Mark Adams wrote: > Hi Alwin, please see my response below. > > On 3 July 2018 at 10:07, Alwin Antreich wrote: > > > On Tue, Jul 03, 2018 at 01:05:51AM +0100, Mark Adams wrote: > > > Currently running the newest 5.2-1 version,

Re: [PVE-User] pveceph createosd after destroyed osd

2018-07-03 Thread Alwin Antreich
On Tue, Jul 03, 2018 at 01:05:51AM +0100, Mark Adams wrote: > Currently running the newest 5.2-1 version, I had a test cluster which was > working fine. I since added more disks, first stopping, then setting out, > then destroying each osd so I could recreate it all from scratch. > > However,
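
A hedged sketch of clearing leftover OSD remains before re-creating (device is a placeholder; which wipe is needed depends on how the OSD was originally created):

  lsblk                                    # check for leftover partitions / LVM volumes
  ceph-disk zap /dev/sdX                   # ceph-disk era OSDs (PVE 5.x default)
  # or: dd if=/dev/zero of=/dev/sdX bs=1M count=200 && partprobe /dev/sdX
  pveceph createosd /dev/sdX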

Re: [PVE-User] high cpu load on 100mbits/sec download with virtio nic

2018-06-09 Thread Alwin Antreich
On Fri, Jun 08, 2018 at 07:39:17AM +, Maxime AUGER wrote: > Hello, > > Let me clarify my statement. > GUEST CPU load is acceptable (25% of a single CPU) > It is the cumulative load of the kvm process and the vhost thread that is > high, on the HOST side > kvm-thread-1=30% > kvm-thred-2=30% >

Re: [PVE-User] Custom storage in ProxMox 5

2018-03-30 Thread Alwin Antreich
Hi Lindsay, On Fri, Mar 30, 2018 at 03:05:11AM +, Lindsay Mathieson wrote: > Ceph has rather larger overheads, much bigger PITA to admin, does not perform > as well on whitebox hardware – in fact the Ceph crowd std reply to issues is > to spend big on enterprise hardware and is far less

Re: [PVE-User] pve-csync version of pve-zsync?

2018-03-13 Thread Alwin Antreich
be nice to have a tool like pve-zsync so I don't have to write some > script myself. Seems to me like something that would be desirable as part > of proxmox as well? That would basically implement the ceph rbd mirror feature. > > Cheers, > Mark > > On 12 March 2018 a

Re: [PVE-User] pve-csync version of pve-zsync?

2018-03-12 Thread Alwin Antreich
Hi Mark, On Mon, Mar 12, 2018 at 03:49:42PM +, Mark Adams wrote: > Hi All, > > Has anyone looked at or thought of making a version of pve-zsync for ceph? > > This would be great for DR scenarios... > > How easy do you think this would be to do? I imagine it wouId it be quite > similar to

Re: [PVE-User] Ghost Ceph node after upgrade to luminous/PVE 5.1

2017-11-15 Thread Alwin Antreich
Hi Eneko, On Wed, Nov 15, 2017 at 09:49:17AM +0100, Eneko Lacunza wrote: > Hi all, > > We have just upgraded our cluster from PVE 4.4/jewel to PVE 5.1/luminous . > > Overall experience was quite good; we found some problems with live > migration because although all VMs had a "default" display,

Re: [PVE-User] pveceph : Unable to add any OSD

2017-09-26 Thread Alwin Antreich
, it is a issue with rocksdb and happens on old hardware, like Opterons. So if your MONs are working fine and you have some space left on your ceph, then it should be no problem after package update to continue using the older hardware. > > Thanks > > > Le 25/09/2017 à 17:27, Alwin Ant

Re: [PVE-User] pveceph : Unable to add any OSD

2017-09-25 Thread Alwin Antreich
files ? PVE 5.1 release is planned for mid/end October, latest then, the packages are in the repository. The pvetest repository gets it earlier. https://forum.proxmox.com/threads/planning-proxmox-ve-5-1-ceph-luminous-kernel-4-13-latest-zfs-lxc-2-1.36943/ > > Thanks > > > Le 25/09/2017 à

Re: [PVE-User] PVE Cluster and /etc/hosts.conf

2017-09-19 Thread Alwin Antreich
Hi Gilberto, On Mon, Sep 18, 2017 at 05:34:47PM -0300, Gilberto Nunes wrote: > Hi guys... > > I always do, as good practices, adjust the /etc/hosts.conf, in order to > resolve the internal IP to the machine name, when creating a cluster. > So, I puted this in /etc/hosts.conf in each node: > >
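
A minimal example of such entries (the file is /etc/hosts; /etc/host.conf is something else entirely; addresses and names are placeholders):

  # /etc/hosts
  10.10.10.1   pve1.example.local pve1
  10.10.10.2   pve2.example.local pve2
  10.10.10.3   pve3.example.local pve3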

Re: [PVE-User] pveceph : Unable to add any OSD

2017-09-18 Thread Alwin Antreich
On Sun, Sep 17, 2017 at 11:18:51AM +0200, Phil Schwarz wrote: > Hi, > going on on the same problem (links [1] & [2] ) > > [1] : https://pve.proxmox.com/pipermail/pve-user/2017-July/168578.html > [2] : https://pve.proxmox.com/pipermail/pve-user/2017-September/168775.html > > -Added a brand new

Re: [PVE-User] USB Devices hotplug

2017-08-11 Thread Alwin Antreich
nd still see pendig device > I try with qm set command as well, and same result > Come'on This feature already work in PVE 4.x. Why this recede??? Did it work with a different device? Does it work without hotplug? > > > > > 2017-08-11 5:21 GMT-03:00 Alwin An

Re: [PVE-User] USB Devices hotplug

2017-08-11 Thread Alwin Antreich
Hi Gilberto, > Gilberto Nunes hat am 10. August 2017 um 18:02 > geschrieben: > > > Hi friends > > I am using PVE 5 here, and when I try to add a external USB Device into > Windows 2012 VM, I see it in red, which means in append mode. > I just have to shutdown the

Re: [PVE-User] Ceph unterstanding questions

2017-03-18 Thread Alwin Antreich
Hi Daniel, On 03/18/2017 11:17 PM, Daniel wrote: > Hi there, > > I created a Ceph cluster with 6 OSDs. Each OSD has 500GB. > > In Proxmox I created a pool with Size 2 / Min 1. > Ceph df shows me I can use roughly 1300GB of disk space. > Now I want to understand what Size and Min mean. > > I think
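
The arithmetic behind the numbers in the quote, as a hedged sketch (replicated pool: usable capacity is roughly raw capacity divided by size, minus per-OSD overhead and the full-ratio margin):

  # 6 OSDs x 500 GB = 3000 GB raw; with size=2 that is ~1500 GB of logical space,
  # which 'ceph df' reports as roughly 1300 GB once overhead and margins are subtracted
  ceph osd pool get <pool> size
  ceph osd pool get <pool> min_size
  ceph df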

Re: [PVE-User] looking for recommendations of VLAN setup

2017-02-07 Thread Alwin Antreich
Hi, On 02/06/2017 11:31 AM, Thomas Lamprecht wrote: > Hi, > >> >> But this setup is exactly what I'd want to avoid. Imagine you have a >> VM running on Node A that needs VLAN 7. With this kind of >> setup Proxmox could migrate the VM to Node B or C in case of failure >> of node A. But if the VM

Re: [PVE-User] OOM Killer problem

2017-02-04 Thread Alwin Antreich
Hi Michele, On 02/04/2017 10:44 AM, Michele Bonera wrote: > Hi. > > I have an issue with OOM Killer (Proxmox 4.4-5 - Kernel 4.4.35-1-pve) on > my infrastructure: even if there is a lot of free memory (15GB used over > 32GB available), OOM Killer is still killing my VM processes. Are you over

Re: [PVE-User] looking for recommendations of VLAN setup

2017-02-04 Thread Alwin Antreich
Hi Uwe, On 02/02/2017 10:22 AM, Uwe Sauter wrote: > Hi all, > > I would like to hear recommendations regarding the network setup of a Proxmox > cluster. The situation is the following: > > * Proxmox hosts have several ethernet links > * multiple VLANs are used in our datacenter > * I cannot

Re: [PVE-User] Share local storage with 2 or more LXC containers

2016-12-02 Thread Alwin Antreich
Hi Marcel, On 12/02/2016 12:02 PM, Marcel van Leeuwen wrote: > Hi, > > I have a problem at the moment and i’ve not yet figured out how to solve > this. > > Can I share local storage and make it accessible to 2 or more LXC containers? > Of course this can be done with remote network storage
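
As a hedged sketch, the approach usually suggested for this is a bind mount of a host directory into each container (container IDs and paths are placeholders):

  # bind-mount /srv/shared from the host into containers 101 and 102
  pct set 101 -mp0 /srv/shared,mp=/shared
  pct set 102 -mp0 /srv/shared,mp=/shared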

Re: [PVE-User] Ceph: PANIC or DON'T PANIC? ;-)

2016-11-29 Thread Alwin Antreich
Hi Marco, On 11/29/2016 03:05 PM, Marco Gaiarin wrote: > Mandi! Alwin Antreich > In chel di` si favelave... > >> What does the following command give you? >> ceph osd pool get min_size > > root@capitanamerica:~# ceph osd pool get DATA min_size > min_size: 1 >

Re: [PVE-User] Ceph: PANIC or DON'T PANIC? ;-)

2016-11-29 Thread Alwin Antreich
Hi Marco, On 11/29/2016 12:17 PM, Marco Gaiarin wrote: > Mandi! Alwin Antreich > In chel di` si favelave... > >> May you please show us the logs? > > Ok, i'm here. With the log. > > A bit of legenda: 10.27.251.7 and 10.27.251.8 are the 'ceph' nodes > (mon+osd);

Re: [PVE-User] Ceph: PANIC or DON'T PANIC? ;-)

2016-11-28 Thread Alwin Antreich
Hi Marco, On 11/28/2016 03:31 PM, Marco Gaiarin wrote: > Mandi! Alwin Antreich > In chel di` si favelave... > >> What did the full ceph status show? > > Do you mean 'ceph status'? I've not saved it, but was OK, as now: > > root@thor:~# ceph status > c

Re: [PVE-User] Ceph: PANIC or DON'T PANIC? ;-)

2016-11-28 Thread Alwin Antreich
Hi Marco, On 11/28/2016 01:05 PM, Marco Gaiarin wrote: > > A very strange saturday evening. Hardware tooling, hacking, caffeine, > ... > > I'm still completing my CEPH storage cluster (now 2 node storage, > waiting to add the third), but is it mostly ''on production''. > So, after playing with

Re: [PVE-User] Migrate dedicates host to LXC container

2016-11-08 Thread Alwin Antreich
Hi Daniel, On 11/07/2016 09:01 PM, Daniel wrote: > Hi there, > > i just tried to migrate a dedicated Host to a LXC container by simple rsync > with —numeric-ids to keet it as it is. > > The VM it selfs starts and can be accessed by Console via Proxmox but no > process is started. > Anyone has

Re: [PVE-User] Promox 4.3 cluster issue

2016-10-25 Thread Alwin Antreich
any > packet loss. > > On Tue, Oct 25, 2016 at 3:02 PM, Alwin Antreich <sysadmin-...@cognitec.com> > wrote: > >> Hi Szabolcs, >> >> On 10/25/2016 12:24 PM, Szabolcs F. wrote: >>> Hi Alwin, >>> >>> bond0 is on two Cisco 4948 switches and

Re: [PVE-User] Promox 4.3 cluster issue

2016-10-25 Thread Alwin Antreich
access.log for pveproxy, so this is the service > status): http://pastebin.com/gPPb4F3x I couldn't find anything unusual, but that doesn't mean there isn't. > > What other logs should I be reading? > > Thanks > > On Tue, Oct 25, 2016 at 11:23 AM, Alwin Antreich <sysadmin-..

Re: [PVE-User] Promox 4.3 cluster issue

2016-10-25 Thread Alwin Antreich
server showing? You know, syslog, dmesg, pveproxy, etc. ;-) > >> Another guess, are all servers synchronizing with a NTP server and have > the correct time? > Yes, NTP is working properly, the firewall lets all NTP request go through. > > > On Mon, Oct 24, 2016 at 5:19 PM,

Re: [PVE-User] Promox 4.3 cluster issue

2016-10-24 Thread Alwin Antreich
Hello Szabolcs, On 10/24/2016 03:16 PM, Szabolcs F. wrote: > Hello, > > I've got a Proxmox VE 4.3 cluster of 12 nodes. All of them are Dell C6220 > sleds. Each has 2x Intel Xeon E5-2670 CPU and 64GB RAM. I've got two > separate networks: 1Gbps LAN (Cisco 4948 switch) and 10Gbps storage (Cisco >

Re: [PVE-User] Ceph and Containers...

2016-10-14 Thread Alwin Antreich
Hi Marco, On 10/14/2016 06:31 PM, Marco Gaiarin wrote: > Mandi! Alwin Antreich > In chel di` si favelave... > >> Did you copy the keyring to /etc/pve/priv/ceph/ and named it lxc.keyring? > > Yes. 'LXC' storage works perfectly. If i enable disk images on it, i > can
