Re: [PVE-User] LVM-thin from one server to other seems show wrong size

2020-07-03 Thread Gilberto Nunes
Hi!
Thanks a lot... I will try it and then report later...


---
Gilberto Nunes Ferreira






Em sex., 3 de jul. de 2020 às 09:43, Mark Schouten  escreveu:

> On Fri, Jul 03, 2020 at 09:19:19AM -0300, Gilberto Nunes wrote:
> > Anybody?
>
> After migrating, the disk is no longer thin-provisioned. Enable discard,
> mount linux vm's with the discard option or run fstrim. On windows,
> check the optimize button in defrag.
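>
> In a Linux guest that boils down to something roughly like this (assuming the
> Discard option is enabled on the VM's virtual disk, e.g. a VirtIO SCSI disk):
>
> # trim all mounted filesystems once, inside the guest
> fstrim -av
> # or mount with the discard option in /etc/fstab, for example:
> # /dev/sda1  /  ext4  defaults,discard  0  1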
>
> --
> Mark Schouten | Tuxis B.V.
> KvK: 74698818 | http://www.tuxis.nl/
> T: +31 318 200208 | i...@tuxis.nl
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] LVM-thin from one server to other seems show wrong size

2020-07-03 Thread Gilberto Nunes
Anybody?
---
Gilberto Nunes Ferreira



Em qui., 2 de jul. de 2020 às 15:00, Gilberto Nunes <
gilberto.nune...@gmail.com> escreveu:

> Hi there
>
> I have two servers in a cluster, both using lvm-thin (aka local-lvm).
> I have named the servers pve01 and pve02.
> I created a VM with 150GB of disk space on pve02.
> On pve02, lvs showed an LSize of 150GB and a Data% of about 15%.
> But then I migrated the VM from pve02 to pve01, and now when I list the
> LVs with lvs it shows:
> lvs | grep 154
> vm-154-disk-0  pve  Vwi-aotz--  140.00g  data  100.00
> Why does Data% show 100?
> I don't get it!
> Thanks for any advice.
>
> ---
> Gilberto Nunes Ferreira
>
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] LVM-thin from one server to other seems show wrong size

2020-07-02 Thread Gilberto Nunes
Hi there

I have two servers in a cluster, both using lvm-thin (aka local-lvm).
I have named the servers pve01 and pve02.
I created a VM with 150GB of disk space on pve02.
On pve02, lvs showed an LSize of 150GB and a Data% of about 15%.
But then I migrated the VM from pve02 to pve01, and now when I list the
LVs with lvs it shows:
lvs | grep 154
vm-154-disk-0  pve  Vwi-aotz--  140.00g  data  100.00
Why does Data% show 100?
I don't get it!
Thanks for any advice.

---
Gilberto Nunes Ferreira
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Some VM's are not able to start

2020-05-14 Thread Gilberto Nunes
The preferred method to disable Transparent HugePages is to add
"transparent_hugepage=never" to the kernel boot line in the "/etc/grub.
conf" file. The server must be rebooted for this to take effect.


---
Gilberto Nunes Ferreira




Em qui., 14 de mai. de 2020 às 13:24, Sivakumar SARAVANAN <
sivakumar.saravanan.jv@valeo-siemens.com> escreveu:

> Thank you so much.
>
> What is the steps to disable the hugepage ?
>
>
> Best regards
> SK
>
> On Thu, May 14, 2020 at 6:20 PM Mark Adams via pve-user <
> pve-user@pve.proxmox.com> wrote:
>
> >
> >
> >
> > -- Forwarded message --
> > From: Mark Adams 
> > To: PVE User List 
> > Cc:
> > Bcc:
> > Date: Thu, 14 May 2020 17:19:09 +0100
> > Subject: Re: [PVE-User] Some VM's are not able to start
> > Do you really need hugepages? if not disable it.
> >
> > On Thu, 14 May 2020 at 17:17, Sivakumar SARAVANAN <
> > sivakumar.saravanan.jv@valeo-siemens.com> wrote:
> >
> > > Hello Daniel,
> > >
> > > Thanks for coming back.
> > >
> > > I mean, I am unable to power ON the VM until shutdown the other VM's in
> > the
> > > same host.
> > >
> > > There are 6 VM's running on each Host, But sometimes all 6 VM's would
> run
> > > without any issue. But Sometimes if I stop ( Shutdown) and Power ON (
> > > Start) getting an error saying as below. But each will have 32 GB
> memory.
> > >
> > > start failed: hugepage allocation failed at
> > > /usr/share/perl5/PVE/QemuServer/Memory.pm line 541.
> > >
> > > Appreciating your suggestion.
> > >
> > >
> > >
> > >
> > > Best regards
> > > SK
> > >
> > > On Thu, May 14, 2020 at 5:46 PM Daniel Berteaud <
> > > dan...@firewall-services.com> wrote:
> > >
> > > >
> > > >
> > > > - Le 14 Mai 20, à 17:38, Sivakumar SARAVANAN
> > > > sivakumar.saravanan.jv@valeo-siemens.com a écrit :
> > > >
> > > > > Hello,
> > > > >
> > > > > We have implemented the Proxmox VE in our environment.
> > > > >
> > > > > So each server will have a maximum 6 VM. But not able to start the
> > few
> > > > VM's
> > > > > ON until we bring down the 1 or 2 VM's in the same Host.
> > > > >
> > > >
> > > > Please describe what you mean by "not able to start"
> > > >
> > > > Cheers,
> > > > Daniel
> > > >
> > > > --
> > > > [ https://www.firewall-services.com/ ]
> > > > Daniel Berteaud
> > > > FIREWALL-SERVICES SAS, La sécurité des réseaux
> > > > Société de Services en Logiciels Libres
> > > > Tél : +33.5 56 64 15 32
> > > > Matrix: @dani:fws.fr
> > > > [ https://www.firewall-services.com/ |
> > https://www.firewall-services.com
> > > ]
> > > >
> > > > ___
> > > > pve-user mailing list
> > > > pve-user@pve.proxmox.com
> > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> > > >
> > >
> > > --
> > > *This e-mail message is intended for the internal use of the intended
> > > recipient(s) only.
> > > The information contained herein is
> > > confidential/privileged. Its disclosure or reproduction is strictly
> > > prohibited.
> > > If you are not the intended recipient, please inform the sender
> > > immediately, do not disclose it internally or to third parties and
> > destroy
> > > it.
> > >
> > > In the course of our business relationship and for business purposes
> > > only, Valeo may need to process some of your personal data.
> > > For more
> > > information, please refer to the Valeo Data Protection Statement and
> > > Privacy notice available on Valeo.com
> > > <https://www.valeo.com/en/ethics-and-compliance/#principes>*
> > > ___
> > > pve-user mailing list
> > > pve-user@pve.proxmox.com
> > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> >
> >
> >
> > -- Forwarded message --
> > From: Mark Adams via pve-user 
> > To: PVE User List 
> > Cc: Mark Adams 
> > 

Re: [PVE-User] Not able to start the VM

2020-04-23 Thread Gilberto Nunes
On Windows, use WinSCP to open an SSH/SFTP session to the Proxmox server.
On Linux, use the sftp command-line client: cd to /etc/pve/qemu-server,
then get VMID.conf to save it to your computer.
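
A minimal sketch of the Linux route (root@pve and VMID 116 here are just
placeholders for your host and VM):

sftp root@pve
sftp> cd /etc/pve/qemu-server
sftp> get 116.conf
sftp> bye

or in one shot: ssh root@pve cat /etc/pve/qemu-server/116.conf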


---
Gilberto Nunes Ferreira


Em qui., 23 de abr. de 2020 às 16:48, Sivakumar SARAVANAN <
sivakumar.saravanan.jv@valeo-siemens.com> escreveu:

> Hello,
>
> Could you let me know the steps to get the file please ?
>
>
> Mit freundlichen Grüßen / Best regards / Cordialement,
>
> Sivakumar SARAVANAN
>
> Externer Dienstleister für / External service provider for
> Valeo Siemens eAutomotive Germany GmbH
> Research & Development
> R & D SWENG TE 1 INFTE
> Frauenauracher Straße 85
> 91056 Erlangen, Germany
> Tel.: +49 9131 9892 
> Mobile: +49 176 7698 5441
> sivakumar.saravanan.jv@valeo-siemens.com
> valeo-siemens.com
>
> Valeo Siemens eAutomotive Germany GmbH: Managing Directors: Holger
> Schwab, Michael
> Axmann; Chairman of the Supervisory Board: Hartmut Klötzer; Registered
> office: Erlangen, Germany; Commercial registry: Fürth, HRB 15655
>
>
> On Thu, Apr 23, 2020 at 9:42 PM Gilberto Nunes  >
> wrote:
>
> > Please, give us the vm config file, placed in /etc/pve/qemu-server
> >
> >
> > ---
> > Gilberto Nunes Ferreira
> >
> >
> > Em qui., 23 de abr. de 2020 às 16:38, Sivakumar SARAVANAN <
> > sivakumar.saravanan.jv@valeo-siemens.com> escreveu:
> >
> > > Hello,
> > >
> > > Not able to start the VM, where it is showing error as below
> > >
> > > start failed: hugepage allocation failed at
> > > /usr/share/perl5/PVE/QemuServer/Memory.pm line 541
> > >
> > >
> > > Mit freundlichen Grüßen / Best regards / Cordialement,
> > >
> > > Sivakumar SARAVANAN
> > >
> > > Externer Dienstleister für / External service provider for
> > > Valeo Siemens eAutomotive Germany GmbH
> > > Research & Development
> > > R & D SWENG TE 1 INFTE
> > > Frauenauracher Straße 85
> > > 91056 Erlangen, Germany
> > > Tel.: +49 9131 9892 
> > > Mobile: +49 176 7698 5441
> > > sivakumar.saravanan.jv@valeo-siemens.com
> > > valeo-siemens.com
> > >
> > > Valeo Siemens eAutomotive Germany GmbH: Managing Directors: Holger
> > > Schwab, Michael
> > > Axmann; Chairman of the Supervisory Board: Hartmut Klötzer; Registered
> > > office: Erlangen, Germany; Commercial registry: Fürth, HRB 15655
> > >
> > > --
> > > *This e-mail message is intended for the internal use of the intended
> > > recipient(s) only.
> > > The information contained herein is
> > > confidential/privileged. Its disclosure or reproduction is strictly
> > > prohibited.
> > > If you are not the intended recipient, please inform the sender
> > > immediately, do not disclose it internally or to third parties and
> > destroy
> > > it.
> > >
> > > In the course of our business relationship and for business purposes
> > > only, Valeo may need to process some of your personal data.
> > > For more
> > > information, please refer to the Valeo Data Protection Statement and
> > > Privacy notice available on Valeo.com
> > > <https://www.valeo.com/en/ethics-and-compliance/#principes>*
> > > ___
> > > pve-user mailing list
> > > pve-user@pve.proxmox.com
> > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> > >
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> >
>
> --
> *This e-mail message is intended for the internal use of the intended
> recipient(s) only.
> The information contained herein is
> confidential/privileged. Its disclosure or reproduction is strictly
> prohibited.
> If you are not the intended recipient, please inform the sender
> immediately, do not disclose it internally or to third parties and destroy
> it.
>
> In the course of our business relationship and for business purposes
> only, Valeo may need to process some of your personal data.
> For more
> information, please refer to the Valeo Data Protection Statement and
> Privacy notice available on Valeo.com
> <https://www.valeo.com/en/ethics-and-compliance/#principes>*
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Not able to start the VM

2020-04-23 Thread Gilberto Nunes
Please, give us the vm config file, placed in /etc/pve/qemu-server


---
Gilberto Nunes Ferreira


Em qui., 23 de abr. de 2020 às 16:38, Sivakumar SARAVANAN <
sivakumar.saravanan.jv@valeo-siemens.com> escreveu:

> Hello,
>
> Not able to start the VM, where it is showing error as below
>
> start failed: hugepage allocation failed at
> /usr/share/perl5/PVE/QemuServer/Memory.pm line 541
>
>
> Mit freundlichen Grüßen / Best regards / Cordialement,
>
> Sivakumar SARAVANAN
>
> Externer Dienstleister für / External service provider for
> Valeo Siemens eAutomotive Germany GmbH
> Research & Development
> R & D SWENG TE 1 INFTE
> Frauenauracher Straße 85
> 91056 Erlangen, Germany
> Tel.: +49 9131 9892 
> Mobile: +49 176 7698 5441
> sivakumar.saravanan.jv@valeo-siemens.com
> valeo-siemens.com
>
> Valeo Siemens eAutomotive Germany GmbH: Managing Directors: Holger
> Schwab, Michael
> Axmann; Chairman of the Supervisory Board: Hartmut Klötzer; Registered
> office: Erlangen, Germany; Commercial registry: Fürth, HRB 15655
>
> --
> *This e-mail message is intended for the internal use of the intended
> recipient(s) only.
> The information contained herein is
> confidential/privileged. Its disclosure or reproduction is strictly
> prohibited.
> If you are not the intended recipient, please inform the sender
> immediately, do not disclose it internally or to third parties and destroy
> it.
>
> In the course of our business relationship and for business purposes
> only, Valeo may need to process some of your personal data.
> For more
> information, please refer to the Valeo Data Protection Statement and
> Privacy notice available on Valeo.com
> <https://www.valeo.com/en/ethics-and-compliance/#principes>*
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Create secondary pool on ceph servers..

2020-04-14 Thread Gilberto Nunes
Oh! Sorry Alwin.
I had some urgency to get this done.
So this is what I did...
First, I added all the disks, both SAS and SSD, as OSDs in the OSD tree.
Then I checked whether the system detected the SSDs as ssd and the SAS disks as
hdd, but there was no difference: it showed all of them with class hdd!
So I changed the class with these commands:
ceph osd crush rm-device-class osd.7
ceph osd crush set-device-class ssd osd.7
ceph osd crush rm-device-class osd.8
ceph osd crush set-device-class ssd osd.8
ceph osd crush rm-device-class osd.12
ceph osd crush set-device-class ssd osd.12
ceph osd crush rm-device-class osd.13
ceph osd crush set-device-class ssd osd.13
ceph osd crush rm-device-class osd.14
ceph osd crush set-device-class ssd osd.14

After that, ceph osd crush tree --show-shadow shows me the different device
classes:
ceph osd crush tree --show-shadow
ID  CLASS WEIGHT   TYPE NAME
-24   ssd  4.36394 root default~ssd
-20   ssd0 host pve1~ssd
-21   ssd0 host pve2~ssd
-17   ssd  0.87279 host pve3~ssd
 7   ssd  0.87279 osd.7
-18   ssd  0.87279 host pve4~ssd
 8   ssd  0.87279 osd.8
-19   ssd  0.87279 host pve5~ssd
12   ssd  0.87279 osd.12
-22   ssd  0.87279 host pve6~ssd
13   ssd  0.87279 osd.13
-23   ssd  0.87279 host pve7~ssd
14   ssd  0.87279 osd.14
-2   hdd 12.00282 root default~hdd
-10   hdd  1.09129 host pve1~hdd
 0   hdd  1.09129 osd.0
.
.

Then I created the rule:

ceph osd crush rule create-replicated SSDPOOL default host ssd
Then I created a pool named SSDs.
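For reference, from the CLI that step would look roughly like this (the PG count
of 128 is only an example value):

ceph osd pool create SSDs 128 128 replicated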

I then assigned the new rule to the pool:
ceph osd pool set SSDs crush_rule SSDPOOL

It seems to work properly...

What do you think?






---
Gilberto Nunes Ferreira


Em ter., 14 de abr. de 2020 às 15:30, Alwin Antreich 
escreveu:

> On Tue, Apr 14, 2020 at 02:35:55PM -0300, Gilberto Nunes wrote:
> > Hi there
> >
> > I have 7 servers with PVE 6 all updated...
> > All servers are named pve1, pve2 and so on...
> > pve3, pve4 and pve5 each have a 960GB SSD.
> > So we decided to create a second pool that will use only these SSDs.
> > I have read about Ceph CRUSH & device classes in order to do that!
> > So, just to do things right, I need to check this:
> > 1 - first, create OSDs on all disks, SAS and SSD
> > 2 - second, create a different pool with the commands below:
> > ruleset:
> >
> > ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
> >
> > create pool
> >
> > ceph osd pool set <pool-name> crush_rule <rule-name>
> >
> >
> > Well, my question is: can I create OSDs on all disks, both SAS and
> > SSD, and only after that create the ruleset and the pool?
> > Will these operations generate any impact?
> If your OSD types aren't mixed, then best create the rule for the
> existing pool first. All data will move, once the rule is applied. So,
> not much movement if they are already on the correct OSD type.
>
> --
> Cheers,
> Alwin
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Create secondary pool on ceph servers..

2020-04-14 Thread Gilberto Nunes
Hi there

I have 7 servers with PVE 6 all updated...
All servers are named pve1, pve2 and so on...
pve3, pve4 and pve5 each have a 960GB SSD.
So we decided to create a second pool that will use only these SSDs.
I have read about Ceph CRUSH & device classes in order to do that!
So, just to do things right, I need to check this:
1 - first, create OSDs on all disks, SAS and SSD
2 - second, create a different pool with the commands below:
ruleset:

ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>

create pool

ceph osd pool set <pool-name> crush_rule <rule-name>


Well, my question is: can I create OSDs on all disks, both SAS and
SSD, and only after that create the ruleset and the pool?
Will these operations generate any impact?


Thanks a lot



Gilberto
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Some erros in Ceph - PVE6

2020-03-29 Thread Gilberto Nunes
Hi guys

I have installed Proxmox 6 and set up 3 servers with PVE 6 and Ceph.
Each of these 3 servers has 4 disks:
3x 500GB SAS
1x 2TB SAS
However, we need to remove the three 500GB disks from each server... So I marked
them out, stopped them, and am waiting for the rebalance, but it is taking too long...
I get this message:
Reduced data availability: 2 pgs inactive, 2 pgs down
pg 1.3a is down, acting [11,9,10]
pg 1.23a is down, acting [11,9,10]
(OSDs 11, 9 and 10 are the 2TB SAS HDDs)
And
too many PGs per OSD (571 > max 250)
I already tried to decrease the number of PGs to 256:
ceph osd pool set VMS pg_num 256
but it seems to have no effect at all:
ceph osd pool get VMS pg_num
pg_num: 571
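
As far as I understand, on Ceph Nautilus (PVE 6) a pg_num decrease is carried out
gradually by the manager, so the reported value can lag behind for a while;
something like this can be used to watch it converge:

ceph osd pool ls detail | grep VMS
ceph -s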

Now the situation is this:

ceph -s
 cluster:
   id: 93c55c6b-ce64-4e1a-92bc-0bc529d695f2
   health: HEALTH_WARN
   Reduced data availability: 2 pgs inactive, 2 pgs down
   Degraded data redundancy: 6913/836472 objects degraded (0.826%),
18 pgs degraded, 19 pgs undersized
   too many PGs per OSD (571 > max 250)

 services:
   mon: 5 daemons, quorum pve3,pve4,pve5,pve7,pve6 (age 51m)
   mgr: pve3(active, since 39h), standbys: pve5, pve7, pve6, pve4
   osd: 12 osds: 3 up (since 16m), 3 in (since 16m); 19 remapped pgs

 data:
   pools:   1 pools, 571 pgs
   objects: 278.82k objects, 1.1 TiB
   usage:   2.9 TiB used, 2.5 TiB / 5.5 TiB avail
   pgs: 0.350% pgs not active
6913/836472 objects degraded (0.826%)
550 active+clean
17  active+undersized+degraded+remapped+backfill_wait
2   down
1   active+undersized+degraded+remapped+backfilling
1   active+undersized+remapped+backfill_wait

 io:
   client:   15 KiB/s rd, 1.0 MiB/s wr, 3 op/s rd, 102 op/s wr
   recovery: 15 MiB/s, 3 objects/s

 progress:
   Rebalancing after osd.2 marked out
 [=.]
   Rebalancing after osd.7 marked out
 [..]
   Rebalancing after osd.6 marked out
 [==]


Do I need to do something, or should I just let Ceph do its work??
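
To see why a specific PG is stuck, it can also be queried directly, e.g. for the
first one listed above:

ceph pg 1.3a query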

Thanks a lot!

Cheers
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Help with Ceph in PVE6

2020-03-28 Thread Gilberto Nunes
[UPDATE]
I notice that in [node] -> Ceph -> Pools the values in the Used % column are
decreasing over time! Perhaps I just need to wait and then see whether the
active+remapped+backfill_wait and active+remapped+backfilling states finish
their operations...
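
If the backfill really is too slow, one optional knob is to raise the
backfill/recovery throttles temporarily (2 is just an example value):

ceph tell 'osd.*' injectargs '--osd_max_backfills 2 --osd_recovery_max_active 2'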
---
Gilberto Nunes Ferreira


Em sáb., 28 de mar. de 2020 às 11:04, Gilberto Nunes <
gilberto.nune...@gmail.com> escreveu:

> Help with Ceph in PVE 6
>
> Hi
>
> I have a ceph cluster created with 3 server
> ServerA has 3 SAS 512GB HDD and 1 SAS 1.3 TB
> ServerB has 3 SAS 512GB HDD and 1 SAS 1.3 TB
> ServerS has 3 SAS 512GB HDD and 1 SAS 1.3 TB
>
> I have one pool named VMS with size/min 3/2. pg_num was initially created
> with 256, but I increased it to 512 and, an hour ago, to 768, and it seems to
> have had no effect on it...
>
> Ceph health apparently is ok but get this with ceph -s command:
>
> ceph -s
>   cluster:
> id: 93c55c6b-ce64-4e1a-92bc-0bc529d695f2
> health: HEALTH_OK
>
>   services:
> mon: 5 daemons, quorum pve3,pve4,pve5,pve7,pve6 (age 15h)
> mgr: pve3(active, since 15h), standbys: pve4, pve5, pve7, pve6
> osd: 12 osds: 12 up (since 10m), 12 in (since 10m); 497 remapped pgs
>
>   data:
> pools:   1 pools, 768 pgs
> objects: 279.34k objects, 1.1 TiB
> usage:   3.0 TiB used, 6.2 TiB / 9.1 TiB avail
> pgs: 375654/838011 objects misplaced (44.827%)
>  494 active+remapped+backfill_wait
>  271 active+clean
>  3   active+remapped+backfilling
>
>   io:
> client:   140 KiB/s rd, 397 KiB/s wr, 12 op/s rd, 64 op/s wr
> recovery: 52 MiB/s, 14 objects/s
>
>
> Is there any action I can take to fix this?
>
> Thanks
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Help with Ceph in PVE6

2020-03-28 Thread Gilberto Nunes
Help with Ceph in PVE 6

Hi

I have a ceph cluster created with 3 servers
ServerA has 3 SAS 512GB HDD and 1 SAS 1.3 TB
ServerB has 3 SAS 512GB HDD and 1 SAS 1.3 TB
ServerS has 3 SAS 512GB HDD and 1 SAS 1.3 TB

I have one pool named VMS with size/min 3/2. pg_num was initially created
with 256, but I increased it to 512 and, an hour ago, to 768, and it seems to
have had no effect on it...

Ceph health apparently is ok but get this with ceph -s command:

ceph -s
  cluster:
id: 93c55c6b-ce64-4e1a-92bc-0bc529d695f2
health: HEALTH_OK

  services:
mon: 5 daemons, quorum pve3,pve4,pve5,pve7,pve6 (age 15h)
mgr: pve3(active, since 15h), standbys: pve4, pve5, pve7, pve6
osd: 12 osds: 12 up (since 10m), 12 in (since 10m); 497 remapped pgs

  data:
pools:   1 pools, 768 pgs
objects: 279.34k objects, 1.1 TiB
usage:   3.0 TiB used, 6.2 TiB / 9.1 TiB avail
pgs: 375654/838011 objects misplaced (44.827%)
 494 active+remapped+backfill_wait
 271 active+clean
 3   active+remapped+backfilling

  io:
client:   140 KiB/s rd, 397 KiB/s wr, 12 op/s rd, 64 op/s wr
recovery: 52 MiB/s, 14 objects/s


Is there any action I can take to fix this?

Thanks
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Use LVM from XenServerf into Proxmox 6

2020-03-26 Thread Gilberto Nunes
Of course there is data that I need to get back; otherwise I would already have
wiped all the partitions.


Em qui, 26 de mar de 2020 06:18, Roland  escreveu:

> maybe - but why?
>
> if there is no data to be preserved, i would zap it with wipefs and
> freshly re-initialize from proxmox webgui
>
> roland
>
> Am 25.03.20 um 16:00 schrieb Gilberto Nunes:
> > Hi there! I have installed Proxmox 6 in a former XenServer. Now I have
> this
> > LVM Physical Volume, that comes from XenServer... Is there any way to
> > convert this PV to use with Proxmox?? Thanks!
> >
> > pve3:~# pvs
> >/dev/sdb: open failed: No medium found
> >Couldn't find device with uuid 4SdKfV-FHGs-yxMV-2Rac-uhBF-XelE-DLycOf.
> >PV VG Fmt
> Attr
> > PSizePFree
> >/dev/sda3  pvelvm2 a--
> >   <119.50g  <81.75g
> >/dev/sdc   VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a--
> >   <419.18g <419.18g
> >/dev/sdd   VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a--
> >   <419.18g <419.18g
> >/dev/sde   VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a--
> >   <419.18g <419.18g
> >/dev/sdf   VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a--
> >   <419.18g <419.18g
> >/dev/sdg   VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a--
> >   <419.18g <419.18g
> >/dev/sdh   VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a--
> >   <419.18g <419.18g
> >/dev/sdi   VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a--
> >   <419.18g <419.18g
> >/dev/sdj   VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a--
> >   <419.18g <419.18g
> >/dev/sdk   VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a--
> >   <419.18g <419.18g
> >[unknown]  VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-m
> >   <516.86g <516.86g
> > pve3:~# lvs
> >/dev/sdb: open failed: No medium found
> >Couldn't find device with uuid 4SdKfV-FHGs-yxMV-2Rac-uhBF-XelE-DLycOf.
> >LV   VG Attr
>  LSize
> >   Pool Origin Data%  Meta%  Move Log
> >   Cpy%Sync Convert
> >MGT  VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b -wi-p-
> 4.00m
> >
> >
> >root pve-wi-ao
> 29.75g
> >
> >
> >swap pve-wi-ao
> 8.00g
> >
> >
> > pve3:~#
> > ---
> > Gilberto Nunes Ferreira
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Use LVM from XenServerf into Proxmox 6

2020-03-25 Thread Gilberto Nunes
Hi there! I have installed Proxmox 6 on a former XenServer host. Now I have these
LVM physical volumes that come from XenServer... Is there any way to convert
these PVs for use with Proxmox?? Thanks!

pve3:~# pvs
  /dev/sdb: open failed: No medium found
  Couldn't find device with uuid 4SdKfV-FHGs-yxMV-2Rac-uhBF-XelE-DLycOf.
  PV VG Fmt  Attr
PSizePFree
  /dev/sda3  pvelvm2 a--
 <119.50g  <81.75g
  /dev/sdc   VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a--
 <419.18g <419.18g
  /dev/sdd   VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a--
 <419.18g <419.18g
  /dev/sde   VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a--
 <419.18g <419.18g
  /dev/sdf   VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a--
 <419.18g <419.18g
  /dev/sdg   VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a--
 <419.18g <419.18g
  /dev/sdh   VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a--
 <419.18g <419.18g
  /dev/sdi   VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a--
 <419.18g <419.18g
  /dev/sdj   VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a--
 <419.18g <419.18g
  /dev/sdk   VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a--
 <419.18g <419.18g
  [unknown]  VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-m
 <516.86g <516.86g
pve3:~# lvs
  /dev/sdb: open failed: No medium found
  Couldn't find device with uuid 4SdKfV-FHGs-yxMV-2Rac-uhBF-XelE-DLycOf.
  LV   VG Attr   LSize
 Pool Origin Data%  Meta%  Move Log
 Cpy%Sync Convert
  MGT  VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b -wi-p-  4.00m


  root pve-wi-ao 29.75g


  swap pve    -wi-ao  8.00g


pve3:~#
---
Gilberto Nunes Ferreira
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] AMD integrated graphics passthrough

2020-03-24 Thread Gilberto Nunes
I guess

qm set VMID -hostpci0 00:02.0

You'll need to figure out which host PCI ID your GPU device has... Follow the guide.
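
For example (the PCI address 0c:00.0 and VMID 100 below are only placeholders):

# on the PVE host, find the GPU's PCI address
lspci -nn | grep -i -E 'vga|display'

# then pass it through to the VM
qm set 100 -hostpci0 0c:00.0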

---
Gilberto Nunes Ferreira






Em ter., 24 de mar. de 2020 às 09:39, petrus 
escreveu:

> > https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough
>
> Thank you, it's a very usefully guide.
> But it seems not mention that whether can I passthrough my only GPU. So, I
> guess you means the answer is Yes?
>
> Gilberto Nunes  于2020年3月24日周二 下午8:28写道:
>
> > https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough
> > ---
> > Gilberto Nunes Ferreira
> >
> >
> >
> >
> >
> > Em ter., 24 de mar. de 2020 às 09:15, petrus 
> > escreveu:
> >
> > > Hi,
> > > My hardware is AMD Ryzen™ 5 3400G with Radeon™ RX Vega 11 Graphics,
> and I
> > > only have this one GPU.
> > > Can I passthrough this vega graphics to VM? I did a lot of search, but
> > > can't find an explicit answer.
> > >
> > > Any help? Thanks in advance.
> > > ___
> > > pve-user mailing list
> > > pve-user@pve.proxmox.com
> > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> > >
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> >
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] AMD integrated graphics passthrough

2020-03-24 Thread Gilberto Nunes
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough
---
Gilberto Nunes Ferreira





Em ter., 24 de mar. de 2020 às 09:15, petrus 
escreveu:

> Hi,
> My hardware is AMD Ryzen™ 5 3400G with Radeon™ RX Vega 11 Graphics, and I
> only have this one GPU.
> Can I passthrough this vega graphics to VM? I did a lot of search, but
> can't find an explicit answer.
>
> Any help? Thanks in advance.
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VZdump: No such disk, but the disk is there!

2020-03-09 Thread Gilberto Nunes
Solved after upgrade to PVE6.
---
Gilberto Nunes Ferreira





Em qui., 20 de fev. de 2020 às 12:26, Gilberto Nunes <
gilberto.nune...@gmail.com> escreveu:

> Any advice?
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
>
> Em qua., 19 de fev. de 2020 às 11:01, Gilberto Nunes <
> gilberto.nune...@gmail.com> escreveu:
>
>> HI there
>>
>> I change the bwlimit to 10 inside /etc/vzdump and vzdump works
>> normally for a couple of days and it's make happy.
>> Now, I have the error again! No logs, no explanation! Just error pure and
>> simple:
>>
>> 110: 2020-02-18 22:18:06 INFO: Starting Backup of VM 110 (qemu)
>> 110: 2020-02-18 22:18:06 INFO: status = running
>> 110: 2020-02-18 22:18:07 INFO: update VM 110: -lock backup
>> 110: 2020-02-18 22:18:07 INFO: VM Name: cliente-V-110-IP-163
>> 110: 2020-02-18 22:18:07 INFO: include disk 'scsi0' 
>> 'local-lvm:vm-110-disk-0' 100G
>> 110: 2020-02-18 22:18:57 ERROR: Backup of VM 110 failed - no such volume 
>> 'local-lvm:vm-110-disk-0'
>>
>> 112: 2020-02-18 22:19:00 INFO: Starting Backup of VM 112 (qemu)
>> 112: 2020-02-18 22:19:00 INFO: status = running
>> 112: 2020-02-18 22:19:01 INFO: update VM 112: -lock backup
>> 112: 2020-02-18 22:19:01 INFO: VM Name: cliente-V-112-IP-165
>> 112: 2020-02-18 22:19:01 INFO: include disk 'scsi0' 
>> 'local-lvm:vm-112-disk-0' 120G
>> 112: 2020-02-18 22:19:31 ERROR: Backup of VM 112 failed - no such volume 
>> 'local-lvm:vm-112-disk-0'
>>
>> 116: 2020-02-18 22:19:31 INFO: Starting Backup of VM 116 (qemu)
>> 116: 2020-02-18 22:19:31 INFO: status = running
>> 116: 2020-02-18 22:19:32 INFO: update VM 116: -lock backup
>> 116: 2020-02-18 22:19:32 INFO: VM Name: cliente-V-IP-162
>> 116: 2020-02-18 22:19:32 INFO: include disk 'scsi0' 
>> 'local-lvm:vm-116-disk-0' 100G
>> 116: 2020-02-18 22:20:05 ERROR: Backup of VM 116 failed - no such volume 
>> 'local-lvm:vm-116-disk-0'
>>
>>
>>
>> ---
>> Gilberto Nunes Ferreira
>>
>> (47) 3025-5907
>> (47) 99676-7530 - Whatsapp / Telegram
>>
>> Skype: gilberto.nunes36
>>
>>
>>
>>
>>
>> Em sex., 14 de fev. de 2020 às 14:22, Gianni Milo <
>> gianni.mil...@gmail.com> escreveu:
>>
>>> If it's happening randomly, my best guess would be that it might be
>>> related
>>> to high i/o during the time frame that the backup takes place.
>>> Have you tried creating multiple backup schedules which will take place
>>> at
>>> different times ? Setting backup bandwidth limits might also help.
>>> Check the PVE administration guide for more details on this. You could
>>> check for any clues in syslog during the time that the failed backup
>>> takes
>>> place as well.
>>>
>>> G.
>>>
>>> On Fri, 14 Feb 2020 at 14:35, Gilberto Nunes >> >
>>> wrote:
>>>
>>> > HI guys
>>> >
>>> > Some problem but with two different vms...
>>> > I also update Proxmox still in 5.x series, but no changes... Now this
>>> > problem ocurrs twice, one night after other...
>>> > I am very concerned about it!
>>> > Please, Proxmox staff, is there something I can do to solve this issue?
>>> > Anybody alread do a bugzilla???
>>> >
>>> > Thanks
>>> > ---
>>> > Gilberto Nunes Ferreira
>>> >
>>> > (47) 3025-5907
>>> > (47) 99676-7530 - Whatsapp / Telegram
>>> >
>>> > Skype: gilberto.nunes36
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > Em qui., 13 de fev. de 2020 às 19:53, Atila Vasconcelos <
>>> > ati...@lightspeed.ca> escreveu:
>>> >
>>> > > Hi,
>>> > >
>>> > > I had the same problem in the past and it repeats once a while
>>> its
>>> > > very random; I could not find any way to reproduce it.
>>> > >
>>> > > But as it happens... it will go away.
>>> > >
>>> > > When you are almost forgetting about it, it will come again ;)
>>> > >
>>> > > I just learned to ignore it (and do manually the backup when it
>>> fails)
>>> > >
>>> > > I see in proxmox 6.x it is less frequent (but

Re: [PVE-User] Error starting VM

2020-03-04 Thread Gilberto Nunes
systemctl status qemu.slice

---
Gilberto Nunes Ferreira


Em qua., 4 de mar. de 2020 às 11:23, Luis G. Coralle via pve-user <
pve-user@pve.proxmox.com> escreveu:

>
>
>
> -- Forwarded message --
> From: "Luis G. Coralle" 
> To: PVE User List 
> Cc:
> Bcc:
> Date: Wed, 4 Mar 2020 11:21:57 -0300
> Subject: Error starting VM
> Hello!
> I'm trying to start windows 10 VM and I get:
> TASK ERROR: timeout waiting on systemd
> The VM was running fine. Last day it crash. I rebooted it and now it
> won't start...
>
> PVE version is 5.4-13
> Did someone have the same thing?
> Thanks
>
> --
> Luis G. Coralle
> Secretaría de TIC
> Facultad de Informática
> Universidad Nacional del Comahue
> (+54) 299-4490300 Int 647
>
>
>
> -- Forwarded message --
> From: "Luis G. Coralle via pve-user" 
> To: PVE User List 
> Cc: "Luis G. Coralle" 
> Bcc:
> Date: Wed, 4 Mar 2020 11:21:57 -0300
> Subject: [PVE-User] Error starting VM
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VM Console - black screen

2020-03-03 Thread Gilberto Nunes
Hi there again

Well, after a while the problem came up again! We have plugged a notebook
directly into the Proxmox server and nevertheless the problem persists!
Does anybody have a clue that could help with this issue?
---
Gilberto Nunes Ferreira




Em seg., 2 de mar. de 2020 às 15:38, Gilberto Nunes <
gilberto.nune...@gmail.com> escreveu:

> It seems the network doesn't support MTU 9000. I have disabled it and
> everything is back to normal...
>
> (syslog)
> Mar  2 14:04:38 tiger corosync[3995]:   [KNET  ] pmtud: This can be caused
> by this node interface MTU too big or a network device that does not
> support or has been misconfigured to manage MTU of this size, or packet
> loss. knet will continue to run but performances might be affected.
> ---
> Gilberto Nunes Ferreira
>
>
> Em seg., 2 de mar. de 2020 às 15:22, Gilberto Nunes <
> gilberto.nune...@gmail.com> escreveu:
>
>> Yes! I have tried Firefox too! Same problem!
>> ---
>> Gilberto Nunes Ferreira
>>
>> (47) 3025-5907
>> (47) 99676-7530 - Whatsapp / Telegram
>>
>> Skype: gilberto.nunes36
>>
>>
>>
>>
>>
>> Em seg., 2 de mar. de 2020 às 15:15, Jonny Proud via pve-user <
>> pve-user@pve.proxmox.com> escreveu:
>>
>>>
>>>
>>>
>>> -- Forwarded message --
>>> From: Jonny Proud 
>>> To: pve-user@pve.proxmox.com
>>> Cc:
>>> Bcc:
>>> Date: Mon, 2 Mar 2020 18:15:21 +
>>> Subject: Re: [PVE-User] VM Console - black screen
>>> Hi Gilberto,
>>>
>>> Mt 6.1.7 install works fine. Have you tried a different browser?
>>>
>>> On 02/03/2020 18:09, Gilberto Nunes wrote:
>>> > Hi there
>>> >
>>> > I have two pve server all fully update, running PVE 6.1-7.
>>> > Both servers are in cluster.
>>> > The problem is when select VM Console nothing happens, and turns in
>>> > black-gray screen!
>>> > Any one has the same behavior??
>>> > Thanks a lot
>>> > ---
>>> > Gilberto Nunes Ferreira
>>> > ___
>>> > pve-user mailing list
>>> > pve-user@pve.proxmox.com
>>> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>>
>>> --
>>> Jonny
>>>
>>>
>>>
>>>
>>> -- Forwarded message --
>>> From: Jonny Proud via pve-user 
>>> To: pve-user@pve.proxmox.com
>>> Cc: Jonny Proud 
>>> Bcc:
>>> Date: Mon, 2 Mar 2020 18:15:21 +
>>> Subject: Re: [PVE-User] VM Console - black screen
>>> ___
>>> pve-user mailing list
>>> pve-user@pve.proxmox.com
>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>>
>>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VM Console - black screen

2020-03-02 Thread Gilberto Nunes
It seems the network doesn't support MTU 9000. I have disabled it and
everything is back to normal...

(syslog)
Mar  2 14:04:38 tiger corosync[3995]:   [KNET  ] pmtud: This can be caused
by this node interface MTU too big or a network device that does not
support or has been misconfigured to manage MTU of this size, or packet
loss. knet will continue to run but performances might be affected.
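
A quick way to verify whether the path really carries 9000-byte frames is a
do-not-fragment ping between the nodes (8972 bytes of payload + 28 bytes of
headers = 9000; 10.0.0.2 is just a placeholder for the peer address):

ping -M do -s 8972 -c 3 10.0.0.2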
---
Gilberto Nunes Ferreira


Em seg., 2 de mar. de 2020 às 15:22, Gilberto Nunes <
gilberto.nune...@gmail.com> escreveu:

> Yes! I have tried Firefox too! Same problem!
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
>
> Em seg., 2 de mar. de 2020 às 15:15, Jonny Proud via pve-user <
> pve-user@pve.proxmox.com> escreveu:
>
>>
>>
>>
>> -- Forwarded message --
>> From: Jonny Proud 
>> To: pve-user@pve.proxmox.com
>> Cc:
>> Bcc:
>> Date: Mon, 2 Mar 2020 18:15:21 +
>> Subject: Re: [PVE-User] VM Console - black screen
>> Hi Gilberto,
>>
>> Mt 6.1.7 install works fine. Have you tried a different browser?
>>
>> On 02/03/2020 18:09, Gilberto Nunes wrote:
>> > Hi there
>> >
>> > I have two pve server all fully update, running PVE 6.1-7.
>> > Both servers are in cluster.
>> > The problem is when select VM Console nothing happens, and turns in
>> > black-gray screen!
>> > Any one has the same behavior??
>> > Thanks a lot
>> > ---
>> > Gilberto Nunes Ferreira
>> > ___
>> > pve-user mailing list
>> > pve-user@pve.proxmox.com
>> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>> --
>> Jonny
>>
>>
>>
>>
>> -- Forwarded message --
>> From: Jonny Proud via pve-user 
>> To: pve-user@pve.proxmox.com
>> Cc: Jonny Proud 
>> Bcc:
>> Date: Mon, 2 Mar 2020 18:15:21 +
>> Subject: Re: [PVE-User] VM Console - black screen
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VM Console - black screen

2020-03-02 Thread Gilberto Nunes
Yes! I have tried Firefox too! Same problem!
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em seg., 2 de mar. de 2020 às 15:15, Jonny Proud via pve-user <
pve-user@pve.proxmox.com> escreveu:

>
>
>
> -- Forwarded message --
> From: Jonny Proud 
> To: pve-user@pve.proxmox.com
> Cc:
> Bcc:
> Date: Mon, 2 Mar 2020 18:15:21 +
> Subject: Re: [PVE-User] VM Console - black screen
> Hi Gilberto,
>
> Mt 6.1.7 install works fine. Have you tried a different browser?
>
> On 02/03/2020 18:09, Gilberto Nunes wrote:
> > Hi there
> >
> > I have two pve server all fully update, running PVE 6.1-7.
> > Both servers are in cluster.
> > The problem is when select VM Console nothing happens, and turns in
> > black-gray screen!
> > Any one has the same behavior??
> > Thanks a lot
> > ---
> > Gilberto Nunes Ferreira
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
> --
> Jonny
>
>
>
>
> -- Forwarded message --
> From: Jonny Proud via pve-user 
> To: pve-user@pve.proxmox.com
> Cc: Jonny Proud 
> Bcc:
> Date: Mon, 2 Mar 2020 18:15:21 +
> Subject: Re: [PVE-User] VM Console - black screen
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] VM Console - black screen

2020-03-02 Thread Gilberto Nunes
Hi there

I have two PVE servers, fully updated, running PVE 6.1-7.
Both servers are in a cluster.
The problem is that when I select the VM Console nothing happens; it just turns
into a black-gray screen!
Does anyone see the same behavior??
Any one has the same behavior??
Thanks a lot
---
Gilberto Nunes Ferreira
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VZdump: No such disk, but the disk is there!

2020-02-20 Thread Gilberto Nunes
Any advice?
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qua., 19 de fev. de 2020 às 11:01, Gilberto Nunes <
gilberto.nune...@gmail.com> escreveu:

> HI there
>
> I change the bwlimit to 10 inside /etc/vzdump and vzdump works
> normally for a couple of days and it's make happy.
> Now, I have the error again! No logs, no explanation! Just error pure and
> simple:
>
> 110: 2020-02-18 22:18:06 INFO: Starting Backup of VM 110 (qemu)
> 110: 2020-02-18 22:18:06 INFO: status = running
> 110: 2020-02-18 22:18:07 INFO: update VM 110: -lock backup
> 110: 2020-02-18 22:18:07 INFO: VM Name: cliente-V-110-IP-163
> 110: 2020-02-18 22:18:07 INFO: include disk 'scsi0' 'local-lvm:vm-110-disk-0' 
> 100G
> 110: 2020-02-18 22:18:57 ERROR: Backup of VM 110 failed - no such volume 
> 'local-lvm:vm-110-disk-0'
>
> 112: 2020-02-18 22:19:00 INFO: Starting Backup of VM 112 (qemu)
> 112: 2020-02-18 22:19:00 INFO: status = running
> 112: 2020-02-18 22:19:01 INFO: update VM 112: -lock backup
> 112: 2020-02-18 22:19:01 INFO: VM Name: cliente-V-112-IP-165
> 112: 2020-02-18 22:19:01 INFO: include disk 'scsi0' 'local-lvm:vm-112-disk-0' 
> 120G
> 112: 2020-02-18 22:19:31 ERROR: Backup of VM 112 failed - no such volume 
> 'local-lvm:vm-112-disk-0'
>
> 116: 2020-02-18 22:19:31 INFO: Starting Backup of VM 116 (qemu)
> 116: 2020-02-18 22:19:31 INFO: status = running
> 116: 2020-02-18 22:19:32 INFO: update VM 116: -lock backup
> 116: 2020-02-18 22:19:32 INFO: VM Name: cliente-V-IP-162
> 116: 2020-02-18 22:19:32 INFO: include disk 'scsi0' 'local-lvm:vm-116-disk-0' 
> 100G
> 116: 2020-02-18 22:20:05 ERROR: Backup of VM 116 failed - no such volume 
> 'local-lvm:vm-116-disk-0'
>
>
>
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
>
> Em sex., 14 de fev. de 2020 às 14:22, Gianni Milo 
> escreveu:
>
>> If it's happening randomly, my best guess would be that it might be
>> related
>> to high i/o during the time frame that the backup takes place.
>> Have you tried creating multiple backup schedules which will take place at
>> different times ? Setting backup bandwidth limits might also help.
>> Check the PVE administration guide for more details on this. You could
>> check for any clues in syslog during the time that the failed backup takes
>> place as well.
>>
>> G.
>>
>> On Fri, 14 Feb 2020 at 14:35, Gilberto Nunes 
>> wrote:
>>
>> > HI guys
>> >
>> > Some problem but with two different vms...
>> > I also update Proxmox still in 5.x series, but no changes... Now this
>> > problem ocurrs twice, one night after other...
>> > I am very concerned about it!
>> > Please, Proxmox staff, is there something I can do to solve this issue?
>> > Anybody alread do a bugzilla???
>> >
>> > Thanks
>> > ---
>> > Gilberto Nunes Ferreira
>> >
>> > (47) 3025-5907
>> > (47) 99676-7530 - Whatsapp / Telegram
>> >
>> > Skype: gilberto.nunes36
>> >
>> >
>> >
>> >
>> >
>> > Em qui., 13 de fev. de 2020 às 19:53, Atila Vasconcelos <
>> > ati...@lightspeed.ca> escreveu:
>> >
>> > > Hi,
>> > >
>> > > I had the same problem in the past and it repeats once a while its
>> > > very random; I could not find any way to reproduce it.
>> > >
>> > > But as it happens... it will go away.
>> > >
>> > > When you are almost forgetting about it, it will come again ;)
>> > >
>> > > I just learned to ignore it (and do manually the backup when it fails)
>> > >
>> > > I see in proxmox 6.x it is less frequent (but still happening once a
>> > > while).
>> > >
>> > >
>> > > ABV
>> > >
>> > >
>> > > On 2020-02-13 4:42 a.m., Gilberto Nunes wrote:
>> > > > Yeah! Me too... This problem is pretty random... Let see next week!
>> > > > ---
>> > > > Gilberto Nunes Ferreira
>> > > >
>> > > > (47) 3025-5907
>> > > > (47) 99676-7530 - Whatsapp / Telegram
>> > > >
>> > > > Skype: gilberto.nunes36
>> > > >
>> > > >
>> > > >
>> > > >
>> > > >
>> > > > Em qui.,

Re: [PVE-User] VZdump: No such disk, but the disk is there!

2020-02-19 Thread Gilberto Nunes
Hi there

I changed the bwlimit to 10 inside /etc/vzdump.conf, and vzdump worked normally
for a couple of days, which made me happy.
Now I have the error again! No logs, no explanation! Just the error, pure and
simple:

110: 2020-02-18 22:18:06 INFO: Starting Backup of VM 110 (qemu)
110: 2020-02-18 22:18:06 INFO: status = running
110: 2020-02-18 22:18:07 INFO: update VM 110: -lock backup
110: 2020-02-18 22:18:07 INFO: VM Name: cliente-V-110-IP-163
110: 2020-02-18 22:18:07 INFO: include disk 'scsi0'
'local-lvm:vm-110-disk-0' 100G
110: 2020-02-18 22:18:57 ERROR: Backup of VM 110 failed - no such
volume 'local-lvm:vm-110-disk-0'

112: 2020-02-18 22:19:00 INFO: Starting Backup of VM 112 (qemu)
112: 2020-02-18 22:19:00 INFO: status = running
112: 2020-02-18 22:19:01 INFO: update VM 112: -lock backup
112: 2020-02-18 22:19:01 INFO: VM Name: cliente-V-112-IP-165
112: 2020-02-18 22:19:01 INFO: include disk 'scsi0'
'local-lvm:vm-112-disk-0' 120G
112: 2020-02-18 22:19:31 ERROR: Backup of VM 112 failed - no such
volume 'local-lvm:vm-112-disk-0'

116: 2020-02-18 22:19:31 INFO: Starting Backup of VM 116 (qemu)
116: 2020-02-18 22:19:31 INFO: status = running
116: 2020-02-18 22:19:32 INFO: update VM 116: -lock backup
116: 2020-02-18 22:19:32 INFO: VM Name: cliente-V-IP-162
116: 2020-02-18 22:19:32 INFO: include disk 'scsi0'
'local-lvm:vm-116-disk-0' 100G
116: 2020-02-18 22:20:05 ERROR: Backup of VM 116 failed - no such
volume 'local-lvm:vm-116-disk-0'
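
For reference, the bandwidth limit mentioned above lives in /etc/vzdump.conf and
is given in KiB/s; a minimal sketch (100000 is only an example value):

# /etc/vzdump.conf
bwlimit: 100000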



---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em sex., 14 de fev. de 2020 às 14:22, Gianni Milo 
escreveu:

> If it's happening randomly, my best guess would be that it might be related
> to high i/o during the time frame that the backup takes place.
> Have you tried creating multiple backup schedules which will take place at
> different times ? Setting backup bandwidth limits might also help.
> Check the PVE administration guide for more details on this. You could
> check for any clues in syslog during the time that the failed backup takes
> place as well.
>
> G.
>
> On Fri, 14 Feb 2020 at 14:35, Gilberto Nunes 
> wrote:
>
> > HI guys
> >
> > Some problem but with two different vms...
> > I also update Proxmox still in 5.x series, but no changes... Now this
> > problem ocurrs twice, one night after other...
> > I am very concerned about it!
> > Please, Proxmox staff, is there something I can do to solve this issue?
> > Anybody alread do a bugzilla???
> >
> > Thanks
> > ---
> > Gilberto Nunes Ferreira
> >
> > (47) 3025-5907
> > (47) 99676-7530 - Whatsapp / Telegram
> >
> > Skype: gilberto.nunes36
> >
> >
> >
> >
> >
> > Em qui., 13 de fev. de 2020 às 19:53, Atila Vasconcelos <
> > ati...@lightspeed.ca> escreveu:
> >
> > > Hi,
> > >
> > > I had the same problem in the past and it repeats once a while its
> > > very random; I could not find any way to reproduce it.
> > >
> > > But as it happens... it will go away.
> > >
> > > When you are almost forgetting about it, it will come again ;)
> > >
> > > I just learned to ignore it (and do manually the backup when it fails)
> > >
> > > I see in proxmox 6.x it is less frequent (but still happening once a
> > > while).
> > >
> > >
> > > ABV
> > >
> > >
> > > On 2020-02-13 4:42 a.m., Gilberto Nunes wrote:
> > > > Yeah! Me too... This problem is pretty random... Let see next week!
> > > > ---
> > > > Gilberto Nunes Ferreira
> > > >
> > > > (47) 3025-5907
> > > > (47) 99676-7530 - Whatsapp / Telegram
> > > >
> > > > Skype: gilberto.nunes36
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > Em qui., 13 de fev. de 2020 às 09:29, Eneko Lacunza <
> > elacu...@binovo.es>
> > > > escreveu:
> > > >
> > > >> Maybe check dm-15 permissions, ls -l /dev/dm-15, but really out of
> > ideas
> > > >> now, sorry!!! ;)
> > > >>
> > > >> El 13/2/20 a las 13:24, Gilberto Nunes escribió:
> > > >>> I can assure you... the disk is there!
> > > >>>
> > > >>> pvesm list local-lvm
> > > >>> local-lvm:vm-101-disk-0raw 53687091200 101
> > > >>> local-lvm:vm-102-disk-0raw 536870912000 102
> > > >>> local-lvm:vm-103-disk-0 

Re: [PVE-User] VZdump: No such disk, but the disk is there!

2020-02-14 Thread Gilberto Nunes
HI guys

Same problem, but with two different VMs...
I also updated Proxmox, still in the 5.x series, but no change... Now this
problem has occurred twice, one night after the other...
I am very concerned about it!
Please, Proxmox staff, is there something I can do to solve this issue?
Has anybody already filed a bugzilla report???

Thanks
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qui., 13 de fev. de 2020 às 19:53, Atila Vasconcelos <
ati...@lightspeed.ca> escreveu:

> Hi,
>
> I had the same problem in the past and it repeats once a while its
> very random; I could not find any way to reproduce it.
>
> But as it happens... it will go away.
>
> When you are almost forgetting about it, it will come again ;)
>
> I just learned to ignore it (and do manually the backup when it fails)
>
> I see in proxmox 6.x it is less frequent (but still happening once a
> while).
>
>
> ABV
>
>
> On 2020-02-13 4:42 a.m., Gilberto Nunes wrote:
> > Yeah! Me too... This problem is pretty random... Let see next week!
> > ---
> > Gilberto Nunes Ferreira
> >
> > (47) 3025-5907
> > (47) 99676-7530 - Whatsapp / Telegram
> >
> > Skype: gilberto.nunes36
> >
> >
> >
> >
> >
> > Em qui., 13 de fev. de 2020 às 09:29, Eneko Lacunza 
> > escreveu:
> >
> >> Maybe check dm-15 permissions, ls -l /dev/dm-15, but really out of ideas
> >> now, sorry!!! ;)
> >>
> >> El 13/2/20 a las 13:24, Gilberto Nunes escribió:
> >>> I can assure you... the disk is there!
> >>>
> >>> pvesm list local-lvm
> >>> local-lvm:vm-101-disk-0raw 53687091200 101
> >>> local-lvm:vm-102-disk-0raw 536870912000 102
> >>> local-lvm:vm-103-disk-0raw 322122547200 103
> >>> local-lvm:vm-104-disk-0raw 214748364800 104
> >>> local-lvm:vm-104-state-LUKPLAS raw 17704157184 104
> >>> local-lvm:vm-105-disk-0raw 751619276800 105
> >>> local-lvm:vm-106-disk-0raw 161061273600 106
> >>> local-lvm:vm-107-disk-0raw 536870912000 107
> >>> local-lvm:vm-108-disk-0raw 214748364800 108
> >>> local-lvm:vm-109-disk-0raw 107374182400 109
> >>> local-lvm:vm-110-disk-0raw 107374182400 110
> >>> local-lvm:vm-111-disk-0raw 107374182400 111
> >>> local-lvm:vm-112-disk-0raw 128849018880 112
> >>> local-lvm:vm-113-disk-0raw 53687091200 113
> >>> local-lvm:vm-113-state-antes_balloon   raw 17704157184 113
> >>> local-lvm:vm-114-disk-0raw 128849018880 114
> >>> local-lvm:vm-115-disk-0raw 107374182400 115
> >>> local-lvm:vm-115-disk-1raw 53687091200 115
> >>> local-lvm:vm-116-disk-0raw 107374182400 116
> >>> local-lvm:vm-117-disk-0raw 107374182400 117
> >>> local-lvm:vm-118-disk-0raw 107374182400 118
> >>> local-lvm:vm-119-disk-0raw 26843545600 119
> >>> local-lvm:vm-121-disk-0raw 107374182400 121
> >>> local-lvm:vm-122-disk-0raw 107374182400 122
> >>> local-lvm:vm-123-disk-0raw 161061273600 123
> >>> local-lvm:vm-124-disk-0raw 107374182400 124
> >>> local-lvm:vm-125-disk-0raw 53687091200 125
> >>> local-lvm:vm-126-disk-0raw 32212254720 126
> >>> local-lvm:vm-127-disk-0raw 53687091200 127
> >>> local-lvm:vm-129-disk-0    raw 21474836480 129
> >>>
> >>> ls -l /dev/pve/vm-110-disk-0
> >>> lrwxrwxrwx 1 root root 8 Nov 11 22:05 /dev/pve/vm-110-disk-0 ->
> ../dm-15
> >>>
> >>>
> >>> ---
> >>> Gilberto Nunes Ferreira
> >>>
> >>> (47) 3025-5907
> >>> (47) 99676-7530 - Whatsapp / Telegram
> >>>
> >>> Skype: gilberto.nunes36
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> Em qui., 13 de fev. de 2020 às 09:19, Eneko Lacunza <
> elacu...@binovo.es>
> >>> escreveu:
> >>>
> >>>> What about:
> >>>>
> >>>> pvesm list local-lvm
> >>>> ls -l /dev/pve/vm-110-disk-0
> >>>>
> >>>> El 13/2/20 a las 12:40, Gilberto Nunes escr

Re: [PVE-User] VZdump: No such disk, but the disk is there!

2020-02-13 Thread Gilberto Nunes
Yeah! Me too... This problem is pretty random... Let's see next week!
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qui., 13 de fev. de 2020 às 09:29, Eneko Lacunza 
escreveu:

>
> Maybe check dm-15 permissions, ls -l /dev/dm-15, but really out of ideas
> now, sorry!!! ;)
>
> El 13/2/20 a las 13:24, Gilberto Nunes escribió:
> > I can assure you... the disk is there!
> >
> > pvesm list local-lvm
> > local-lvm:vm-101-disk-0raw 53687091200 101
> > local-lvm:vm-102-disk-0raw 536870912000 102
> > local-lvm:vm-103-disk-0raw 322122547200 103
> > local-lvm:vm-104-disk-0raw 214748364800 104
> > local-lvm:vm-104-state-LUKPLAS raw 17704157184 104
> > local-lvm:vm-105-disk-0raw 751619276800 105
> > local-lvm:vm-106-disk-0raw 161061273600 106
> > local-lvm:vm-107-disk-0raw 536870912000 107
> > local-lvm:vm-108-disk-0raw 214748364800 108
> > local-lvm:vm-109-disk-0raw 107374182400 109
> > local-lvm:vm-110-disk-0raw 107374182400 110
> > local-lvm:vm-111-disk-0raw 107374182400 111
> > local-lvm:vm-112-disk-0raw 128849018880 112
> > local-lvm:vm-113-disk-0raw 53687091200 113
> > local-lvm:vm-113-state-antes_balloon   raw 17704157184 113
> > local-lvm:vm-114-disk-0raw 128849018880 114
> > local-lvm:vm-115-disk-0raw 107374182400 115
> > local-lvm:vm-115-disk-1raw 53687091200 115
> > local-lvm:vm-116-disk-0raw 107374182400 116
> > local-lvm:vm-117-disk-0raw 107374182400 117
> > local-lvm:vm-118-disk-0raw 107374182400 118
> > local-lvm:vm-119-disk-0raw 26843545600 119
> > local-lvm:vm-121-disk-0raw 107374182400 121
> > local-lvm:vm-122-disk-0raw 107374182400 122
> > local-lvm:vm-123-disk-0raw 161061273600 123
> > local-lvm:vm-124-disk-0raw 107374182400 124
> > local-lvm:vm-125-disk-0raw 53687091200 125
> > local-lvm:vm-126-disk-0raw 32212254720 126
> > local-lvm:vm-127-disk-0raw 53687091200 127
> > local-lvm:vm-129-disk-0raw 21474836480 129
> >
> > ls -l /dev/pve/vm-110-disk-0
> > lrwxrwxrwx 1 root root 8 Nov 11 22:05 /dev/pve/vm-110-disk-0 -> ../dm-15
> >
> >
> > ---
> > Gilberto Nunes Ferreira
> >
> > (47) 3025-5907
> > (47) 99676-7530 - Whatsapp / Telegram
> >
> > Skype: gilberto.nunes36
> >
> >
> >
> >
> >
> > Em qui., 13 de fev. de 2020 às 09:19, Eneko Lacunza 
> > escreveu:
> >
> >> What about:
> >>
> >> pvesm list local-lvm
> >> ls -l /dev/pve/vm-110-disk-0
> >>
> >> El 13/2/20 a las 12:40, Gilberto Nunes escribió:
> >>> Quite strange to say the least
> >>>
> >>>
> >>> ls /dev/pve/*
> >>> /dev/pve/root  /dev/pve/vm-109-disk-0
> >>> /dev/pve/vm-118-disk-0
> >>> /dev/pve/swap  /dev/pve/vm-110-disk-0
> >>> /dev/pve/vm-119-disk-0
> >>> /dev/pve/vm-101-disk-0 /dev/pve/vm-111-disk-0
> >>> /dev/pve/vm-121-disk-0
> >>> /dev/pve/vm-102-disk-0 /dev/pve/vm-112-disk-0
> >>> /dev/pve/vm-122-disk-0
> >>> /dev/pve/vm-103-disk-0 /dev/pve/vm-113-disk-0
> >>> /dev/pve/vm-123-disk-0
> >>> /dev/pve/vm-104-disk-0 /dev/pve/vm-113-state-antes_balloon
> >>>/dev/pve/vm-124-disk-0
> >>> /dev/pve/vm-104-state-LUKPLAS  /dev/pve/vm-114-disk-0
> >>> /dev/pve/vm-125-disk-0
> >>> /dev/pve/vm-105-disk-0 /dev/pve/vm-115-disk-0
> >>> /dev/pve/vm-126-disk-0
> >>> /dev/pve/vm-106-disk-0 /dev/pve/vm-115-disk-1
> >>> /dev/pve/vm-127-disk-0
> >>> /dev/pve/vm-107-disk-0 /dev/pve/vm-116-disk-0
> >>> /dev/pve/vm-129-disk-0
> >>> /dev/pve/vm-108-disk-0 /dev/pve/vm-117-disk-0
> >>>
> >>> ls /dev/mapper/
> >>> control   pve-vm--104--state--LUKPLAS
> >>>pve-vm--115--disk--1
> >>> iscsi-backup  pve-vm--105--disk--0
> >>> pve-vm--116--disk--0
> >>> mpathapve-vm--106--disk--0
> >>> pve-vm--117--disk--0
> >>> pve-data  p

Re: [PVE-User] VZdump: No such disk, but the disk is there!

2020-02-13 Thread Gilberto Nunes
I can assure you... the disk is there!

pvesm list local-lvm
local-lvm:vm-101-disk-0                raw  53687091200   101
local-lvm:vm-102-disk-0                raw  536870912000  102
local-lvm:vm-103-disk-0                raw  322122547200  103
local-lvm:vm-104-disk-0                raw  214748364800  104
local-lvm:vm-104-state-LUKPLAS         raw  17704157184   104
local-lvm:vm-105-disk-0                raw  751619276800  105
local-lvm:vm-106-disk-0                raw  161061273600  106
local-lvm:vm-107-disk-0                raw  536870912000  107
local-lvm:vm-108-disk-0                raw  214748364800  108
local-lvm:vm-109-disk-0                raw  107374182400  109
local-lvm:vm-110-disk-0                raw  107374182400  110
local-lvm:vm-111-disk-0                raw  107374182400  111
local-lvm:vm-112-disk-0                raw  128849018880  112
local-lvm:vm-113-disk-0                raw  53687091200   113
local-lvm:vm-113-state-antes_balloon   raw  17704157184   113
local-lvm:vm-114-disk-0                raw  128849018880  114
local-lvm:vm-115-disk-0                raw  107374182400  115
local-lvm:vm-115-disk-1                raw  53687091200   115
local-lvm:vm-116-disk-0                raw  107374182400  116
local-lvm:vm-117-disk-0                raw  107374182400  117
local-lvm:vm-118-disk-0                raw  107374182400  118
local-lvm:vm-119-disk-0                raw  26843545600   119
local-lvm:vm-121-disk-0                raw  107374182400  121
local-lvm:vm-122-disk-0                raw  107374182400  122
local-lvm:vm-123-disk-0                raw  161061273600  123
local-lvm:vm-124-disk-0                raw  107374182400  124
local-lvm:vm-125-disk-0                raw  53687091200   125
local-lvm:vm-126-disk-0                raw  32212254720   126
local-lvm:vm-127-disk-0                raw  53687091200   127
local-lvm:vm-129-disk-0                raw  21474836480   129

ls -l /dev/pve/vm-110-disk-0
lrwxrwxrwx 1 root root 8 Nov 11 22:05 /dev/pve/vm-110-disk-0 -> ../dm-15


---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qui., 13 de fev. de 2020 às 09:19, Eneko Lacunza 
escreveu:

> What about:
>
> pvesm list local-lvm
> ls -l /dev/pve/vm-110-disk-0
>
> El 13/2/20 a las 12:40, Gilberto Nunes escribió:
> > Quite strange to say the least
> >
> >
> > ls /dev/pve/*
> > /dev/pve/root  /dev/pve/vm-109-disk-0
> > /dev/pve/vm-118-disk-0
> > /dev/pve/swap  /dev/pve/vm-110-disk-0
> > /dev/pve/vm-119-disk-0
> > /dev/pve/vm-101-disk-0 /dev/pve/vm-111-disk-0
> > /dev/pve/vm-121-disk-0
> > /dev/pve/vm-102-disk-0 /dev/pve/vm-112-disk-0
> > /dev/pve/vm-122-disk-0
> > /dev/pve/vm-103-disk-0 /dev/pve/vm-113-disk-0
> > /dev/pve/vm-123-disk-0
> > /dev/pve/vm-104-disk-0 /dev/pve/vm-113-state-antes_balloon
> >   /dev/pve/vm-124-disk-0
> > /dev/pve/vm-104-state-LUKPLAS  /dev/pve/vm-114-disk-0
> > /dev/pve/vm-125-disk-0
> > /dev/pve/vm-105-disk-0 /dev/pve/vm-115-disk-0
> > /dev/pve/vm-126-disk-0
> > /dev/pve/vm-106-disk-0 /dev/pve/vm-115-disk-1
> > /dev/pve/vm-127-disk-0
> > /dev/pve/vm-107-disk-0 /dev/pve/vm-116-disk-0
> > /dev/pve/vm-129-disk-0
> > /dev/pve/vm-108-disk-0 /dev/pve/vm-117-disk-0
> >
> > ls /dev/mapper/
> > control   pve-vm--104--state--LUKPLAS
> >   pve-vm--115--disk--1
> > iscsi-backup  pve-vm--105--disk--0
> > pve-vm--116--disk--0
> > mpathapve-vm--106--disk--0
> > pve-vm--117--disk--0
> > pve-data  pve-vm--107--disk--0
> > pve-vm--118--disk--0
> > pve-data_tdatapve-vm--108--disk--0
> > pve-vm--119--disk--0
> > pve-data_tmetapve-vm--109--disk--0
> > pve-vm--121--disk--0
> > pve-data-tpoolpve-vm--110--disk--0
> > pve-vm--122--disk--0
> > pve-root  pve-vm--111--disk--0
> > pve-vm--123--disk--0
> > pve-swap  pve-vm--112--disk--0
> > pve-vm--124--disk--0
> > pve-vm--101--disk--0  pve-vm--113--disk--0
> > pve-vm--125--disk--0
> > pve-vm--102--disk--0  pve-vm--113--state--antes_balloon
> >   pve-vm--126--disk--0
> > pve-vm--103--disk--0  pve-vm--114--disk--0
> > pve-vm--127--disk--0
> > pve-vm--104--disk--0  pve-vm--115--disk--0
> > pve-vm--129--disk--0
> >
> >
> > ---
> > Gilberto Nunes Ferreira
> >
> > (47) 3025-5907
> > (47) 99676-7530 - Whatsapp / Telegram
> >
> > Skype: gilberto.nunes36
> >
> >
> >
> >
> >
> > Em qui., 13 de fev. de 2020 às 08:38, Eneko Lacunza 
> > escreveu:
> >
> >> It's quite strange, what about "ls /dev/pve/*"?
> >>

Re: [PVE-User] VZdump: No such disk, but the disk is there!

2020-02-13 Thread Gilberto Nunes
Quite strange to say the least


ls /dev/pve/*
/dev/pve/root  /dev/pve/vm-109-disk-0
/dev/pve/vm-118-disk-0
/dev/pve/swap  /dev/pve/vm-110-disk-0
/dev/pve/vm-119-disk-0
/dev/pve/vm-101-disk-0 /dev/pve/vm-111-disk-0
/dev/pve/vm-121-disk-0
/dev/pve/vm-102-disk-0 /dev/pve/vm-112-disk-0
/dev/pve/vm-122-disk-0
/dev/pve/vm-103-disk-0 /dev/pve/vm-113-disk-0
/dev/pve/vm-123-disk-0
/dev/pve/vm-104-disk-0 /dev/pve/vm-113-state-antes_balloon
 /dev/pve/vm-124-disk-0
/dev/pve/vm-104-state-LUKPLAS  /dev/pve/vm-114-disk-0
/dev/pve/vm-125-disk-0
/dev/pve/vm-105-disk-0 /dev/pve/vm-115-disk-0
/dev/pve/vm-126-disk-0
/dev/pve/vm-106-disk-0 /dev/pve/vm-115-disk-1
/dev/pve/vm-127-disk-0
/dev/pve/vm-107-disk-0 /dev/pve/vm-116-disk-0
/dev/pve/vm-129-disk-0
/dev/pve/vm-108-disk-0 /dev/pve/vm-117-disk-0

ls /dev/mapper/
control   pve-vm--104--state--LUKPLAS
 pve-vm--115--disk--1
iscsi-backup  pve-vm--105--disk--0
pve-vm--116--disk--0
mpathapve-vm--106--disk--0
pve-vm--117--disk--0
pve-data  pve-vm--107--disk--0
pve-vm--118--disk--0
pve-data_tdatapve-vm--108--disk--0
pve-vm--119--disk--0
pve-data_tmetapve-vm--109--disk--0
pve-vm--121--disk--0
pve-data-tpoolpve-vm--110--disk--0
pve-vm--122--disk--0
pve-root  pve-vm--111--disk--0
pve-vm--123--disk--0
pve-swap  pve-vm--112--disk--0
pve-vm--124--disk--0
pve-vm--101--disk--0  pve-vm--113--disk--0
pve-vm--125--disk--0
pve-vm--102--disk--0  pve-vm--113--state--antes_balloon
 pve-vm--126--disk--0
pve-vm--103--disk--0  pve-vm--114--disk--0
pve-vm--127--disk--0
pve-vm--104--disk--0  pve-vm--115--disk--0
pve-vm--129--disk--0


---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qui., 13 de fev. de 2020 às 08:38, Eneko Lacunza 
escreveu:

> It's quite strange, what about "ls /dev/pve/*"?
>
> El 13/2/20 a las 12:18, Gilberto Nunes escribió:
> > n: Thu Feb 13 07:06:19 2020
> > a2web:~# lvs
> >LV   VGAttr   LSize   Pool Origin
> > Data%  Meta%  Move Log Cpy%Sync Convert
> >backup   iscsi -wi-ao   1.61t
> >
> >data pve   twi-aotz--   3.34t
> > 88.21  9.53
> >root pve   -wi-ao  96.00g
> >
> >snap_vm-104-disk-0_LUKPLAS   pve   Vri---tz-k 200.00g data
> > vm-104-disk-0
> >snap_vm-113-disk-0_antes_balloon pve   Vri---tz-k  50.00g data
> > vm-113-disk-0
> >swap pve   -wi-ao   8.00g
> >
> >vm-101-disk-0pve   Vwi-aotz--  50.00g data
> >  24.17
> >vm-102-disk-0pve   Vwi-aotz-- 500.00g data
> >  65.65
> >vm-103-disk-0pve   Vwi-aotz-- 300.00g data
> >  37.28
> >vm-104-disk-0pve   Vwi-aotz-- 200.00g data
> >  17.87
> >vm-104-state-LUKPLAS pve   Vwi-a-tz--  16.49g data
> >  35.53
> >vm-105-disk-0pve   Vwi-aotz-- 700.00g data
> >  90.18
> >vm-106-disk-0pve   Vwi-aotz-- 150.00g data
> >  93.55
> >vm-107-disk-0pve   Vwi-aotz-- 500.00g data
> >  98.20
> >vm-108-disk-0pve   Vwi-aotz-- 200.00g data
> >  98.02
> >vm-109-disk-0pve   Vwi-aotz-- 100.00g data
> >  93.68
> >vm-110-disk-0pve   Vwi-aotz-- 100.00g data
> >  34.55
> >vm-111-disk-0pve   Vwi-aotz-- 100.00g data
> >  79.03
> >vm-112-disk-0pve   Vwi-aotz-- 120.00g data
> >  93.78
> >vm-113-disk-0pve   Vwi-aotz--  50.00g data
> >  65.42
> >vm-113-state-antes_balloon   pve   Vwi-a-tz--  16.49g data
> >  43.64
> >vm-114-disk-0pve   Vwi-aotz-- 120.00g data
> >  100.00
> >vm-115-disk-0pve   Vwi-a-tz-- 100.00g data
> >  70.28
> >vm-115-disk-1pve   Vwi-a-tz--  50.00g data
> >  0.00
> >vm-116-disk-0pve   Vwi-aotz-- 100.00g data
> >  26.34
> >vm-117-disk-0pve   Vwi-aotz-- 100.00g data
> >  100.00
> >vm-118-disk-0pve   Vwi-aotz-- 100.00g data
> >  100.00
> >vm-119-disk-0pve   Vwi-aotz--  25.00g data
> >  18.42
> >vm-121-disk-0pve   Vwi-aotz-

Re: [PVE-User] VZdump: No such disk, but the disk is there!

2020-02-13 Thread Gilberto Nunes
n: Thu Feb 13 07:06:19 2020
a2web:~# lvs
  LV   VGAttr   LSize   Pool Origin
   Data%  Meta%  Move Log Cpy%Sync Convert
  backup   iscsi -wi-ao   1.61t

  data pve   twi-aotz--   3.34t
   88.21  9.53
  root pve   -wi-ao  96.00g

  snap_vm-104-disk-0_LUKPLAS   pve   Vri---tz-k 200.00g data
vm-104-disk-0
  snap_vm-113-disk-0_antes_balloon pve   Vri---tz-k  50.00g data
vm-113-disk-0
  swap pve   -wi-ao   8.00g

  vm-101-disk-0pve   Vwi-aotz--  50.00g data
24.17
  vm-102-disk-0pve   Vwi-aotz-- 500.00g data
65.65
  vm-103-disk-0pve   Vwi-aotz-- 300.00g data
37.28
  vm-104-disk-0pve   Vwi-aotz-- 200.00g data
17.87
  vm-104-state-LUKPLAS pve   Vwi-a-tz--  16.49g data
35.53
  vm-105-disk-0pve   Vwi-aotz-- 700.00g data
90.18
  vm-106-disk-0pve   Vwi-aotz-- 150.00g data
93.55
  vm-107-disk-0pve   Vwi-aotz-- 500.00g data
98.20
  vm-108-disk-0pve   Vwi-aotz-- 200.00g data
98.02
  vm-109-disk-0pve   Vwi-aotz-- 100.00g data
93.68
  vm-110-disk-0pve   Vwi-aotz-- 100.00g data
34.55
  vm-111-disk-0pve   Vwi-aotz-- 100.00g data
79.03
  vm-112-disk-0pve   Vwi-aotz-- 120.00g data
93.78
  vm-113-disk-0pve   Vwi-aotz--  50.00g data
65.42
  vm-113-state-antes_balloon   pve   Vwi-a-tz--  16.49g data
43.64
  vm-114-disk-0pve   Vwi-aotz-- 120.00g data
100.00
  vm-115-disk-0pve   Vwi-a-tz-- 100.00g data
70.28
  vm-115-disk-1pve   Vwi-a-tz--  50.00g data
0.00
  vm-116-disk-0pve   Vwi-aotz-- 100.00g data
26.34
  vm-117-disk-0pve   Vwi-aotz-- 100.00g data
100.00
  vm-118-disk-0pve   Vwi-aotz-- 100.00g data
100.00
  vm-119-disk-0pve   Vwi-aotz--  25.00g data
18.42
  vm-121-disk-0pve   Vwi-aotz-- 100.00g data
23.76
  vm-122-disk-0pve   Vwi-aotz-- 100.00g data
100.00
  vm-123-disk-0pve   Vwi-aotz-- 150.00g data
37.89
  vm-124-disk-0pve   Vwi-aotz-- 100.00g data
30.73
  vm-125-disk-0pve   Vwi-aotz--  50.00g data
9.02
  vm-126-disk-0pve   Vwi-aotz--  30.00g data
99.72
  vm-127-disk-0pve   Vwi-aotz--  50.00g data
10.79
  vm-129-disk-0pve   Vwi-aotz--  20.00g data
45.04

cat /etc/pve/storage.cfg
dir: local
path /var/lib/vz
content backup,iso,vztmpl

lvmthin: local-lvm
thinpool data
vgname pve
content rootdir,images

iscsi: iscsi
portal some-portal
target some-target
content images

lvm: iscsi-lvm
vgname iscsi
base iscsi:0.0.0.scsi-mpatha
content rootdir,images
shared 1

dir: backup
path /backup
content images,rootdir,iso,backup
maxfiles 3
shared 0
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qui., 13 de fev. de 2020 às 08:11, Eneko Lacunza 
escreveu:

> Can you send the output for "lvs" and "cat /etc/pve/storage.cfg"?
>
> El 13/2/20 a las 11:13, Gilberto Nunes escribió:
> > HI all
> >
> > Still in trouble with this issue
> >
> > cat daemon.log | grep "Feb 12 22:10"
> > Feb 12 22:10:00 a2web systemd[1]: Starting Proxmox VE replication
> runner...
> > Feb 12 22:10:01 a2web systemd[1]: Started Proxmox VE replication runner.
> > Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110
> (qemu)
> > Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - no
> > such volume 'local-lvm:vm-110-disk-0'
> >
> > syslog
> > Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110
> (qemu)
> > Feb 12 22:10:06 a2web qm[18860]:  update VM 110: -lock backup
> > Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - no
> > such volume 'local-lvm:vm-110-disk-0'
> >
> > pveversion
> > pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-12-pve)
> >
> > proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve)
> > pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec)
> > pve-kernel-4.15: 5.4-12
> > pve-kernel-4.15.18-24-pve: 4.15.18-52
> > pve-kernel-4.15.18-12-pve: 4.15.18-36
> > corosync: 2.4.4-pve1
> > criu: 2.11.1-1~bpo90
> > glusterfs-clien

Re: [PVE-User] VZdump: No such disk, but the disk is there!

2020-02-13 Thread Gilberto Nunes
HI all

Still in trouble with this issue

cat daemon.log | grep "Feb 12 22:10"
Feb 12 22:10:00 a2web systemd[1]: Starting Proxmox VE replication runner...
Feb 12 22:10:01 a2web systemd[1]: Started Proxmox VE replication runner.
Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110 (qemu)
Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - no
such volume 'local-lvm:vm-110-disk-0'

syslog
Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110 (qemu)
Feb 12 22:10:06 a2web qm[18860]:  update VM 110: -lock backup
Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - no
such volume 'local-lvm:vm-110-disk-0'

pveversion
pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-12-pve)

proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve)
pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec)
pve-kernel-4.15: 5.4-12
pve-kernel-4.15.18-24-pve: 4.15.18-52
pve-kernel-4.15.18-12-pve: 4.15.18-36
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-12
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-56
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-14
libpve-storage-perl: 5.0-44
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-7
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-38
pve-container: 2.0-41
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-7
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-55
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2


Some help? Should I upgrade the server to 6.x?
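
Meanwhile, this is what I plan to check the next time the job fails (vm-110 is
just the volume from the log above; a rough sketch, not a fix):

pvesm path local-lvm:vm-110-disk-0                 # does PVE resolve the volume at that moment?
lvs -o lv_name,lv_attr,lv_size pve/vm-110-disk-0   # is the LV present and active?
lvchange -ay pve/vm-110-disk-0                     # if it is inactive, activate it and retry the backup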

Thanks

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qui., 30 de jan. de 2020 às 10:10, Gilberto Nunes <
gilberto.nune...@gmail.com> escreveu:

> Hi there
>
> I got a strage error last night. Vzdump complain about the
> disk no exist or lvm volume in this case but the volume exist, indeed!
> In the morning I have do a manually backup and it's working fine...
> Any advice?
>
> 112: 2020-01-29 22:20:02 INFO: Starting Backup of VM 112 (qemu)
> 112: 2020-01-29 22:20:02 INFO: status = running
> 112: 2020-01-29 22:20:03 INFO: update VM 112: -lock backup
> 112: 2020-01-29 22:20:03 INFO: VM Name: cliente-V-112-IP-165
> 112: 2020-01-29 22:20:03 INFO: include disk 'scsi0' 'local-lvm:vm-112-disk-0' 
> 120G
> 112: 2020-01-29 22:20:23 ERROR: Backup of VM 112 failed - no such volume 
> 'local-lvm:vm-112-disk-0'
>
> 116: 2020-01-29 22:20:23 INFO: Starting Backup of VM 116 (qemu)
> 116: 2020-01-29 22:20:23 INFO: status = running
> 116: 2020-01-29 22:20:24 INFO: update VM 116: -lock backup
> 116: 2020-01-29 22:20:24 INFO: VM Name: cliente-V-IP-162
> 116: 2020-01-29 22:20:24 INFO: include disk 'scsi0' 'local-lvm:vm-116-disk-0' 
> 100G
> 116: 2020-01-29 22:20:49 ERROR: Backup of VM 116 failed - no such volume 
> 'local-lvm:vm-116-disk-0'
>
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] CPU Pinning...

2020-02-03 Thread Gilberto Nunes
Hi there

Is there any way to do CPU pinning in Proxmox 6?
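
To make the question concrete: what I would like to avoid scripting by hand is
something like the manual pinning below (VMID 100 and cores 2-3 are only
examples, and the pidfile path is the one qemu-server uses, as far as I can tell):

taskset -pc 2,3 $(cat /var/run/qemu-server/100.pid)   # pin the whole QEMU process of VM 100 to cores 2-3
taskset -pc $(cat /var/run/qemu-server/100.pid)       # show the resulting affinity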

Thanks
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VZdump: No such disk, but the disk is there!

2020-01-30 Thread Gilberto Nunes
Hi Marco...

I have already had errors caused by storage timeouts, but this one is very
different from that...
Very unusual indeed!
Thanks for the reply!

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qui., 30 de jan. de 2020 às 10:33, Marco Gaiarin 
escreveu:

> Mandi! Gilberto Nunes
>   In chel di` si favelave...
>
> > Any advice?
>
> Happen 'spot' also here; i'm convinced that, under some specific
> circumstances, eg, high load on the SAN, backup 'timeout' and the error
> reported is that, a bit misleading indeed.
>
> FYI.
>
> --
> dott. Marco Gaiarin GNUPG Key ID:
> 240A3D66
>   Associazione ``La Nostra Famiglia''
> http://www.lanostrafamiglia.it/
>   Polo FVG   -   Via della Bontà, 7 - 33078   -   San Vito al Tagliamento
> (PN)
>   marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f
> +39-0434-842797
>
> Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
>   http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
> (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] VZdump: No such disk, but the disk is there!

2020-01-30 Thread Gilberto Nunes
Hi there

I got a strange error last night. Vzdump complains that the
disk (the LVM volume, in this case) does not exist, but the volume does exist, indeed!
In the morning I did a manual backup and it worked fine...
Any advice?

112: 2020-01-29 22:20:02 INFO: Starting Backup of VM 112 (qemu)
112: 2020-01-29 22:20:02 INFO: status = running
112: 2020-01-29 22:20:03 INFO: update VM 112: -lock backup
112: 2020-01-29 22:20:03 INFO: VM Name: cliente-V-112-IP-165
112: 2020-01-29 22:20:03 INFO: include disk 'scsi0'
'local-lvm:vm-112-disk-0' 120G
112: 2020-01-29 22:20:23 ERROR: Backup of VM 112 failed - no such
volume 'local-lvm:vm-112-disk-0'

116: 2020-01-29 22:20:23 INFO: Starting Backup of VM 116 (qemu)
116: 2020-01-29 22:20:23 INFO: status = running
116: 2020-01-29 22:20:24 INFO: update VM 116: -lock backup
116: 2020-01-29 22:20:24 INFO: VM Name: cliente-V-IP-162
116: 2020-01-29 22:20:24 INFO: include disk 'scsi0'
'local-lvm:vm-116-disk-0' 100G
116: 2020-01-29 22:20:49 ERROR: Backup of VM 116 failed - no such
volume 'local-lvm:vm-116-disk-0'

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] HP RX Backup freezing the server

2020-01-21 Thread Gilberto Nunes
Hi there

I have PVE 6, fully updated and working; however, when I plug this device into the
server, it just freezes and I need to reboot the entire server...
Is there any incompatibility?
https://cc.cnetcontent.com/vcs/hp-ent/inline-content/7A/9/A/9A1E8A6A3F2FCF7EF7DA5EC974A81ED71CC62190_feature.jpg

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Problem w Storage iSCSI

2020-01-21 Thread Gilberto Nunes
Hi there

I get this message in dmesg:
rdac retrying mode select command

And the server is slow...
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] qm import stuck

2020-01-21 Thread Gilberto Nunes
Hi there
I am using PVE 6, fully updated (pve-no-subscription)...
I have a Dell Storage MD3200i and have set up multipath...
But when I try to import a disk into an LVM volume created over multipath, it gets stuck:
qm importdisk 108 vm-107-disk-0.raw VMDATA
importing disk 'vm-107-disk-0.raw' to VM 108 ...
trying to acquire cfs lock 'storage-VMDATA' ...
trying to acquire cfs lock 'storage-VMDATA' ...
  Logical volume "vm-108-disk-0" created.
transferred: 0 bytes remaining: 107374182400 bytes total: 107374182400
bytes progression: 0.00 %

I get some messages in dmesg:

74394] sd 15:0:0:0: rdac: array Storage_Abatex, ctlr 0, queueing
MODE_SELECT command
[63033.074899] sd 15:0:0:0: rdac: array Storage_Abatex, ctlr 0, MODE_SELECT
returned with sense 06/94/01
[63033.074902] sd 15:0:0:0: rdac: array Storage_Abatex, ctlr 0, retrying
MODE_SELECT command
[63033.519329] sd 16:0:0:0: rdac: array Storage_Abatex, ctlr 0, queueing
MODE_SELECT command
[63033.519835] sd 16:0:0:0: rdac: array Storage_Abatex, ctlr 0, MODE_SELECT
returned with sense 06/94/01
[63033.519836] sd 16:0:0:0: rdac: array Storage_Abatex, ctlr 0, retrying
MODE_SELECT command

This is the multpath file:
cat /etc/multipath.conf
defaults {
polling_interval 3
path_selector "round-robin 0"
max_fds "max"
path_grouping_policy multibus
uid_attribute "ID_SERIAL"
rr_min_io 100
failback immediate
no_path_retry queue
}

blacklist {
wwid .*
devnode "^sda"
device {
vendor "DELL"
product "Universal Xport"
}
}
devices {
device {
vendor "DELL"
product "MD32xxi"
path_grouping_policy group_by_prio
prio rdac
#polling_interval 5
path_checker rdac
path_selector "round-robin 0"
hardware_handler "1 rdac"
failback immediate
features "2 pg_init_retries 50"
no_path_retry 30
rr_min_io 100
#prio_callout "/sbin/mpath_prio_rdac /dev/%n"
}
}
blacklist_exceptions {
wwid "361418770003f0d280a6c5df0a959"
}
multipaths {
multipath {
wwid "361418770003f0d280a6c5df0a959"
alias mpath0
}

}

I added the Dell MD32xxi section to the file above with the server
online... Do I need to reboot the servers, or is restarting the
multipath service enough?
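
For reference, this is what I intend to try first instead of a full reboot (a
sketch, assuming the Debian Buster packaging of multipath-tools):

systemctl restart multipath-tools    # reload the daemon with the edited multipath.conf
multipath -r                         # force a reload of the multipath maps
multipath -ll                        # check that the MD32xxi paths are grouped as expected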

Thanks for any help

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Reply: Need help with iscsi Dell MD3200i and MULTIPATH

2019-12-12 Thread Gilberto Nunes
Hi there

Thanks
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qui., 12 de dez. de 2019 às 00:19, Kriengsak Panphotong 
escreveu:

> Hi list,
>
> I found the solution at this link.
>
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=932307
>
> you need to add "find_multipaths yes" to /etc/multipath.conf
>
> # multipath.conf example
> defaults {
> polling_interval2
> path_selector   "round-robin 0"
> path_grouping_policymultibus
> uid_attribute   ID_SERIAL
> rr_min_io   100
> failbackimmediate
> no_path_retry   queue
> user_friendly_names yes
> find_multipaths yes
> }
>
> after adding it, restart multipath-tools (/etc/init.d/multipath-tools restart);
> it works great for me.
>
> --
> *จาก:* pve-user  ในนามของ Gilberto
> Nunes 
> *ส่ง:* 12 ธันวาคม 2562 02:38
> *ถึง:* pve-user@pve.proxmox.com 
> *ชื่อเรื่อง:* Re: [PVE-User] Need help with iscsi Dell MD3200i and
> MULTIPATH
>
> But still I am in doubt... This is some Debian Buster bug
>
>
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
>
> Em qua., 11 de dez. de 2019 às 16:25, Gilberto Nunes <
> gilberto.nune...@gmail.com> escreveu:
>
> > After spending hours trying to fix this, I finally figure it out:
> > After fill the file /etc/multipath/wwids with the wwids of the iscsi
> disk,
> > like this:
> > cat /etc/multipath/wwids
> > # Multipath wwids, Version : 1.0
> > # NOTE: This file is automatically maintained by multipath and
> multipathd.
> > # You should not need to edit this file in normal circumstances.
> > #
> > # Valid WWIDs:
> > /361418770003f0d280a6c5df0a959/
> >
> > Now everything is work properly
> > multipath -ll
> > mylun (361418770003f0d280a6c5df0a959) dm-2 DELL,MD32xxi
> > size=1.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1
> > rdac' wp=rw
> > `-+- policy='round-robin 0' prio=9 status=active
> > |- 15:0:0:0 sdb 8:16 active ready running
> > `- 16:0:0:0 sdc 8:32 active ready running
> > ls /dev/mapper/
> > control mylun pve-root pve-swap
> >
> > I don't know what's happen here, but in my case I needed to do this
> > manually!
> > This file in /etc/mulitpath/wwids should not be full fill automatically
> by
> > multipath??? So weird!
> > ---
> > Gilberto Nunes Ferreira
> >
> > (47) 3025-5907
> > (47) 99676-7530 - Whatsapp / Telegram
> >
> > Skype: gilberto.nunes36
> >
> >
> >
> >
> >
> > Em qua., 11 de dez. de 2019 às 15:55, Gilberto Nunes <
> > gilberto.nune...@gmail.com> escreveu:
> >
> >> Hi there
> >>
> >> I am using PVE 6.1-3 and trying to use iscsi over Dell Storage MD3200i.
> >> I have successful deploy the storage to give access to block device via
> 2
> >> different IPs
> >> How ever, I can not deploy multipath properly.
> >> When I try multipath -ll give me nothing or multipath -v3 tells me that
> >> the wwid is not in the file
> >> BTW, I have used this site as reference:
> >> https://pve.proxmox.com/wiki/ISCSI_Multipath
> >> <https://pve.proxmox.com/wiki/ISCSI_Multipath#Dell>
> >>
> >>
> >>
> https://icicimov.github.io/blog/virtualization/Adding-iSCSI-shared-volume-to-Proxmox-to-support-Live-Migration/
> >>
> >>
> >> Here's the /etc/multipath.conf:
> >> defaults {
> >> polling_interval 3
> >> path_selector "round-robin 0"
> >> max_fds "max"
> >> path_grouping_policy multibus
> >> uid_attribute "ID_SERIAL"
> >> rr_min_io 100
> >> failback immediate
> >> no_path_retry queue
> >> }
> >>
> >> blacklist {
> >> wwid .*
> >> devnode "^sda"
> >> device {
> >> vendor "DELL"
> >> product "Universal Xport"
> >> }
> >> }
> >> devices {
> >> device {
> >> vendor "DELL"
> >> product "MD32xxi"
> >> 

[PVE-User] Predictable names for ethernet devices...

2019-12-12 Thread Gilberto Nunes
Hi guys

Is there any negative effect if I change the predictable names for the ethernet
devices from eno1, eno2, etc. to eth0, eth1, and so on?
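
For context, the change I have in mind is the kernel-parameter route rather
than renaming interfaces by hand; a sketch (and I assume /etc/network/interfaces,
including any bridge-ports lines, must be updated to the old names before
rebooting):

# in /etc/default/grub, then run update-grub and reboot
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"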

Thanks a lot

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Need help with iscsi Dell MD3200i and MULTIPATH

2019-12-11 Thread Gilberto Nunes
But I am still in doubt... Is this some Debian Buster bug?


---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qua., 11 de dez. de 2019 às 16:25, Gilberto Nunes <
gilberto.nune...@gmail.com> escreveu:

> After spending hours trying to fix this, I finally figure it out:
> After fill the file /etc/multipath/wwids with the wwids of the iscsi disk,
> like this:
> cat /etc/multipath/wwids
> # Multipath wwids, Version : 1.0
> # NOTE: This file is automatically maintained by multipath and multipathd.
> # You should not need to edit this file in normal circumstances.
> #
> # Valid WWIDs:
> /361418770003f0d280a6c5df0a959/
>
> Now everything is work properly
> multipath -ll
> mylun (361418770003f0d280a6c5df0a959) dm-2 DELL,MD32xxi
> size=1.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1
> rdac' wp=rw
> `-+- policy='round-robin 0' prio=9 status=active
> |- 15:0:0:0 sdb 8:16 active ready running
> `- 16:0:0:0 sdc 8:32 active ready running
> ls /dev/mapper/
> control mylun pve-root pve-swap
>
> I don't know what's happen here, but in my case I needed to do this
> manually!
> This file in /etc/mulitpath/wwids should not be full fill automatically by
> multipath??? So weird!
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
>
> Em qua., 11 de dez. de 2019 às 15:55, Gilberto Nunes <
> gilberto.nune...@gmail.com> escreveu:
>
>> Hi there
>>
>> I am using PVE 6.1-3 and trying to use iscsi over Dell Storage MD3200i.
>> I have successful deploy the storage to give access to block device via 2
>> different IPs
>> How ever, I can not deploy multipath properly.
>> When I try multipath -ll give me nothing or multipath -v3 tells me that
>> the wwid is not in the file
>> BTW, I have used this site as reference:
>> https://pve.proxmox.com/wiki/ISCSI_Multipath
>> <https://pve.proxmox.com/wiki/ISCSI_Multipath#Dell>
>>
>>
>> https://icicimov.github.io/blog/virtualization/Adding-iSCSI-shared-volume-to-Proxmox-to-support-Live-Migration/
>>
>>
>> Here's the /etc/multipath.conf:
>> defaults {
>> polling_interval 3
>> path_selector "round-robin 0"
>> max_fds "max"
>> path_grouping_policy multibus
>> uid_attribute "ID_SERIAL"
>> rr_min_io 100
>> failback immediate
>> no_path_retry queue
>> }
>>
>> blacklist {
>> wwid .*
>> devnode "^sda"
>> device {
>> vendor "DELL"
>> product "Universal Xport"
>> }
>> }
>> devices {
>> device {
>> vendor "DELL"
>> product "MD32xxi"
>> path_grouping_policy group_by_prio
>> prio rdac
>> #polling_interval 5
>> path_checker rdac
>> path_selector "round-robin 0"
>> hardware_handler "1 rdac"
>> failback immediate
>> features "2 pg_init_retries 50"
>> no_path_retry 30
>> rr_min_io 100
>> #prio_callout "/sbin/mpath_prio_rdac /dev/%n"
>> }
>> }
>> blacklist_exceptions {
>> wwid "361418770003f0d280a6c5df0a959"
>> }
>> multipaths {
>> multipath {
>> wwid "361418770003f0d280a6c5df0a959"
>> alias mpath0
>> }
>>
>> }
>>  lsscsi
>> [0:0:0:0]diskHGST HUS726040ALS210  KU27  /dev/sda
>> [14:0:0:0]   cd/dvd  PLDS DVD+-RW DU-8A5LH 6D5M  /dev/sr0
>> [15:0:0:0]   diskDELL MD32xxi  0820  /dev/sdb
>> [15:0:0:31]  diskDELL Universal Xport  0820  -
>> [16:0:0:0]   diskDELL MD32xxi  0820  /dev/sdc
>> [16:0:0:31]  diskDELL Universal Xport  0820  -
>>
>> multipath -v3
>> Dec 11 15:53:42 | set open fds limit to 1048576/1048576
>> Dec 11 15:53:42 | loading //lib/multipath/libchecktur.so checker
>> Dec 11 15:53:42 | checker tur: message table size = 3
>> Dec 11 15:53:42 | loading //lib/multipath/libprioconst.so prioritizer
>> Dec 11 15:53:42 | foreign library "nvme" loaded successfully
>> Dec 11 15:53:42 | sr0: blacklisted, udev

Re: [PVE-User] Need help with iscsi Dell MD3200i and MULTIPATH

2019-12-11 Thread Gilberto Nunes
After spending hours trying to fix this, I finally figured it out:
After filling the file /etc/multipath/wwids with the wwid of the iSCSI disk,
like this:
cat /etc/multipath/wwids
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/361418770003f0d280a6c5df0a959/

Now everything works properly:
multipath -ll
mylun (361418770003f0d280a6c5df0a959) dm-2 DELL,MD32xxi
size=1.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1
rdac' wp=rw
`-+- policy='round-robin 0' prio=9 status=active
|- 15:0:0:0 sdb 8:16 active ready running
`- 16:0:0:0 sdc 8:32 active ready running
ls /dev/mapper/
control mylun pve-root pve-swap

I don't know what happened here, but in my case I needed to do this
manually!
Shouldn't the file /etc/multipath/wwids be filled in automatically by
multipath? So weird!
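
A note for the archives: instead of editing /etc/multipath/wwids by hand, the
wwids can apparently be recorded through the tool itself; a sketch, assuming the
two iSCSI paths are still /dev/sdb and /dev/sdc as in the lsscsi output above:

multipath -a /dev/sdb                    # record this path's wwid in /etc/multipath/wwids
multipath -a /dev/sdc
/etc/init.d/multipath-tools restart
multipath -ll                            # the mpath device should now assemble
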
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qua., 11 de dez. de 2019 às 15:55, Gilberto Nunes <
gilberto.nune...@gmail.com> escreveu:

> Hi there
>
> I am using PVE 6.1-3 and trying to use iscsi over Dell Storage MD3200i.
> I have successful deploy the storage to give access to block device via 2
> different IPs
> How ever, I can not deploy multipath properly.
> When I try multipath -ll give me nothing or multipath -v3 tells me that
> the wwid is not in the file
> BTW, I have used this site as reference:
> https://pve.proxmox.com/wiki/ISCSI_Multipath
> <https://pve.proxmox.com/wiki/ISCSI_Multipath#Dell>
>
>
> https://icicimov.github.io/blog/virtualization/Adding-iSCSI-shared-volume-to-Proxmox-to-support-Live-Migration/
>
>
> Here's the /etc/multipath.conf:
> defaults {
> polling_interval 3
> path_selector "round-robin 0"
> max_fds "max"
> path_grouping_policy multibus
> uid_attribute "ID_SERIAL"
> rr_min_io 100
> failback immediate
> no_path_retry queue
> }
>
> blacklist {
> wwid .*
> devnode "^sda"
> device {
> vendor "DELL"
> product "Universal Xport"
> }
> }
> devices {
> device {
> vendor "DELL"
> product "MD32xxi"
> path_grouping_policy group_by_prio
> prio rdac
> #polling_interval 5
> path_checker rdac
> path_selector "round-robin 0"
> hardware_handler "1 rdac"
> failback immediate
> features "2 pg_init_retries 50"
> no_path_retry 30
> rr_min_io 100
> #prio_callout "/sbin/mpath_prio_rdac /dev/%n"
> }
> }
> blacklist_exceptions {
> wwid "361418770003f0d280a6c5df0a959"
> }
> multipaths {
> multipath {
> wwid "361418770003f0d280a6c5df0a959"
> alias mpath0
> }
>
> }
>  lsscsi
> [0:0:0:0]diskHGST HUS726040ALS210  KU27  /dev/sda
> [14:0:0:0]   cd/dvd  PLDS DVD+-RW DU-8A5LH 6D5M  /dev/sr0
> [15:0:0:0]   diskDELL MD32xxi  0820  /dev/sdb
> [15:0:0:31]  diskDELL Universal Xport  0820  -
> [16:0:0:0]   diskDELL MD32xxi  0820  /dev/sdc
> [16:0:0:31]  diskDELL Universal Xport  0820  -
>
> multipath -v3
> Dec 11 15:53:42 | set open fds limit to 1048576/1048576
> Dec 11 15:53:42 | loading //lib/multipath/libchecktur.so checker
> Dec 11 15:53:42 | checker tur: message table size = 3
> Dec 11 15:53:42 | loading //lib/multipath/libprioconst.so prioritizer
> Dec 11 15:53:42 | foreign library "nvme" loaded successfully
> Dec 11 15:53:42 | sr0: blacklisted, udev property missing
> Dec 11 15:53:42 | sda: udev property ID_WWN whitelisted
> Dec 11 15:53:42 | sda: device node name blacklisted
> Dec 11 15:53:42 | sdb: udev property ID_WWN whitelisted
> Dec 11 15:53:42 | sdb: mask = 0x1f
> Dec 11 15:53:42 | sdb: dev_t = 8:16
> Dec 11 15:53:42 | sdb: size = 2097152
> Dec 11 15:53:42 | sdb: vendor = DELL
> Dec 11 15:53:42 | sdb: product = MD32xxi
> Dec 11 15:53:42 | sdb: rev = 0820
> Dec 11 15:53:42 | find_hwe: found 2 hwtable matches for DELL:MD32xxi:0820
> Dec 11 15:53:42 | sdb: h:b:t:l = 15:0:0:0
> Dec 11 15:53:42 | sdb: tgt_node_name =
> iqn.1984-05.com.dell:powervault.md3200i.61418770003f0d285bbca139
> Dec 11 15:53:42 | sdb: path state = running
> Dec 11 15:53:42 | sdb: 1011 cyl, 

[PVE-User] Need help with iscsi Dell MD3200i and MULTIPATH

2019-12-11 Thread Gilberto Nunes
al = 3CS00II
Dec 11 15:53:42 | sdc: get_state
Dec 11 15:53:42 | sdc: detect_checker = yes (setting: multipath internal)
Dec 11 15:53:42 | sdc: path_checker = rdac (setting: storage device
autodetected)
Dec 11 15:53:42 | sdc: checker timeout = 30 s (setting: kernel sysfs)
Dec 11 15:53:42 | sdc: rdac state = up
Dec 11 15:53:42 | sdc: uid_attribute = ID_SERIAL (setting: multipath.conf
defaults/devices section)
Dec 11 15:53:42 | sdc: uid = 361418770003f0d280a6c5df0a959 (udev)
Dec 11 15:53:42 | sdc: detect_prio = yes (setting: multipath internal)
Dec 11 15:53:42 | sdc: prio = rdac (setting: storage device configuration)
Dec 11 15:53:42 | sdc: prio args = "" (setting: storage device
configuration)
Dec 11 15:53:42 | sdc: rdac prio = 11
Dec 11 15:53:42 | loop0: blacklisted, udev property missing
Dec 11 15:53:42 | loop1: blacklisted, udev property missing
Dec 11 15:53:42 | loop2: blacklisted, udev property missing
Dec 11 15:53:42 | loop3: blacklisted, udev property missing
Dec 11 15:53:42 | loop4: blacklisted, udev property missing
Dec 11 15:53:42 | loop5: blacklisted, udev property missing
Dec 11 15:53:42 | loop6: blacklisted, udev property missing
Dec 11 15:53:42 | loop7: blacklisted, udev property missing
Dec 11 15:53:42 | dm-0: blacklisted, udev property missing
Dec 11 15:53:42 | dm-1: blacklisted, udev property missing
= paths list =
uuid  hcil dev dev_t pri dm_st chk_st
vend/pro
361418770003f0d280a6c5df0a959 15:0:0:0 sdb 8:16  11  undef undef
 DELL,MD3
361418770003f0d280a6c5df0a959 16:0:0:0 sdc 8:32  11  undef undef
 DELL,MD3
Dec 11 15:53:42 | libdevmapper version 1.02.155 (2018-12-18)
Dec 11 15:53:42 | DM multipath kernel driver v1.13.0
Dec 11 15:53:42 | sdb: udev property ID_WWN whitelisted
Dec 11 15:53:42 | sdb: wwid 361418770003f0d280a6c5df0a959 whitelisted
Dec 11 15:53:42 | wwid 361418770003f0d280a6c5df0a959 not in wwids file,
skipping sdb
Dec 11 15:53:42 | sdb: orphan path, only one path
Dec 11 15:53:42 | rdac prioritizer refcount 2
Dec 11 15:53:42 | sdc: udev property ID_WWN whitelisted
Dec 11 15:53:42 | sdc: wwid 361418770003f0d280a6c5df0a959 whitelisted
Dec 11 15:53:42 | wwid 361418770003f0d280a6c5df0a959 not in wwids file,
skipping sdc
Dec 11 15:53:42 | sdc: orphan path, only one path
Dec 11 15:53:42 | rdac prioritizer refcount 1
Dec 11 15:53:42 | unloading rdac prioritizer
Dec 11 15:53:42 | unloading const prioritizer
Dec 11 15:53:42 | unloading rdac checker
Dec 11 15:53:42 | unloading tur checker

I will appreciated any help.

Thanks




---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Storage iSCSI and multipath

2019-12-06 Thread Gilberto Nunes
Hi there

I got it! I need to configure LVM on top of iSCSI in order to use LVM
shared storage... RTFM! =D
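
For the archives, this is roughly what I mean by LVM on top of iSCSI in
storage.cfg terms; only a sketch: the portal, target, VG name and base LUN below
are placeholders, the volume group has to be created on the LUN first, and the
base line must reference the LUN exactly as "pvesm list VM-DATA" reports it:

iscsi: VM-DATA
portal 192.168.130.102
target iqn.1984-05.com.dell:powervault.md3200i.61418770003f0d285bbca139
content images

lvm: VM-DATA-LVM
vgname vmdata
base VM-DATA:0.0.0.scsi-mpatha
content images,rootdir
shared 1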

Thanks
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qui., 5 de dez. de 2019 às 19:02, Martin Holub via pve-user <
pve-user@pve.proxmox.com> escreveu:

>
>
>
> -- Forwarded message --
> From: Martin Holub 
> To: pve-user@pve.proxmox.com
> Cc:
> Bcc:
> Date: Thu, 5 Dec 2019 23:01:49 +0100
> Subject: Re: [PVE-User] Storage iSCSI and multipath
> Am 05.12.19 um 20:53 schrieb Gilberto Nunes:
>
> > Hi there
> >
> > I have some doubt regarding multipath iscsi
> > I have set up the multipath and it's ok...
> > But my doubt is how can I point the storage in proxmox to see both IP to
> > connect to iscsi storage?
> > I tried this
> > iscsi: VM-DATA
> >  portal 192.168.130.102
> >  portal 192.168.131.102
> >  target
> >
> iqn.1984-05.com.dell:powervault.md3200i.61418770003f0d285bbca139OK
> >  content images
> >
> > but when I list the storage
> > pvesm list VM-DATA
> > got this error
> >
> > file /etc/pve/storage.cfg line 7 (section 'VM-DATA') - unable to parse
> > value of 'portal': duplicate attribute
> > ---
> > Gilberto Nunes Ferreira
> >
> > (47) 3025-5907
> > (47) 99676-7530 - Whatsapp / Telegram
> >
> > Skype: gilberto.nunes36
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>
> Hi,
>
> If i haven't missed it, there is still no multipath Support within
> Proxmox? You will need to use something like multipath tools on debian,
> create a Blockdevice and then use with LVM or whatever. There is an
> Article in the Wiki covering the Setup:
> https://pve.proxmox.com/wiki/ISCSI_Multipath
>
> Best
> Martin
>
>
>
>
> -- Forwarded message --
> From: Martin Holub via pve-user 
> To: pve-user@pve.proxmox.com
> Cc: Martin Holub 
> Bcc:
> Date: Thu, 5 Dec 2019 23:01:49 +0100
> Subject: Re: [PVE-User] Storage iSCSI and multipath
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Storage iSCSI and multipath

2019-12-05 Thread Gilberto Nunes
Hi there

I have a doubt regarding multipath iSCSI.
I have set up multipath and it's OK...
But my doubt is: how can I point the storage in Proxmox at both IPs to
connect to the iSCSI storage?
I tried this
iscsi: VM-DATA
portal 192.168.130.102
portal 192.168.131.102
target
iqn.1984-05.com.dell:powervault.md3200i.61418770003f0d285bbca139OK
content images

but when I list the storage
pvesm list VM-DATA
got this error

file /etc/pve/storage.cfg line 7 (section 'VM-DATA') - unable to parse
value of 'portal': duplicate attribute
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Migrate direct from Xen to iSCSI LUN

2019-12-03 Thread Gilberto Nunes
Hi there.
I'm looking for some insight on how to migrate VMs from Citrix (XenServer) directly to an
iSCSI LUN. I have a Dell Storage MD3200i, so the only way is to work directly
with LUNs... Or perhaps someone can point me in the right direction. I
suppose I can use Clonezilla to migrate on-the-fly to a VM whose
disk lives on an iSCSI LUN. Any advice would be nice?
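
To give the idea some shape, this is the path I would sketch, assuming the Xen
disk has already been exported to an image format qemu-img understands (vhd or
raw) and that an LVM storage named VM-DATA sits on top of the iSCSI LUN; file
names, VMID and storage name are placeholders:

qemu-img convert -p -O raw exported-disk.vhd vm-101.raw   # convert the exported disk to raw
qm importdisk 101 vm-101.raw VM-DATA                      # import it into the storage on the LUN
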
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ZFS rpool grub rescue boot...

2019-11-27 Thread Gilberto Nunes
Question: I wonder if I could use hpssacli to do the firmware update.
Perhaps this can work, don't you agree?
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qua., 27 de nov. de 2019 às 11:43, Gilberto Nunes <
gilberto.nune...@gmail.com> escreveu:

> Hi... yes! That's great, but unfortunately I do not have access to update
> the firmware... It's a cloud dedicated server!
> But I am glad to know that is possible to change the raid mode...
>
> Thanks a lot
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
>
> Em qua., 27 de nov. de 2019 às 11:20, Tim Duelken 
> escreveu:
>
>> Hi Gilberto,
>>
>> > I just installed Proxmox 6 in an HPE server, which has HP Smart Array
>> P420i
>> > and, unfortunately, this not so Smart Array doesn't give me the options
>> to
>> > make non-raid or HBA/IT Mode...
>> > The Proxmox installer ran smoothly, but when try to boot, get this error
>> >
>> > unknow device
>> >
>> > Somebody can help with this issue??
>>
>> We use this controller with ZFS. You can put it into HBA-Mode. See here:
>> https://ahelpme.com/servers/hewlett-packard/smart-array-p440-enable-or-disable-hba-mode-using-smart-storage-administrator/
>> <
>> https://ahelpme.com/servers/hewlett-packard/smart-array-p440-enable-or-disable-hba-mode-using-smart-storage-administrator/
>> >
>>
>> BUT - you can’t boot from any disk this controller is attached to.
>>
>> br
>> Tim
>>
>>
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ZFS rpool grub rescue boot...

2019-11-27 Thread Gilberto Nunes
Hi... yes! That's great, but unfortunately I do not have access to update
the firmware... It's a cloud dedicated server!
But I am glad to know that it is possible to change the RAID mode...

Thanks a lot
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qua., 27 de nov. de 2019 às 11:20, Tim Duelken  escreveu:

> Hi Gilberto,
>
> > I just installed Proxmox 6 in an HPE server, which has HP Smart Array
> P420i
> > and, unfortunately, this not so Smart Array doesn't give me the options
> to
> > make non-raid or HBA/IT Mode...
> > The Proxmox installer ran smoothly, but when try to boot, get this error
> >
> > unknow device
> >
> > Somebody can help with this issue??
>
> We use this controller with ZFS. You can put it into HBA-Mode. See here:
> https://ahelpme.com/servers/hewlett-packard/smart-array-p440-enable-or-disable-hba-mode-using-smart-storage-administrator/
> <
> https://ahelpme.com/servers/hewlett-packard/smart-array-p440-enable-or-disable-hba-mode-using-smart-storage-administrator/
> >
>
> BUT - you can’t boot from any disk this controller is attached to.
>
> br
> Tim
>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ZFS rpool grub rescue boot...

2019-11-27 Thread Gilberto Nunes
That's nice Marco! Thanks for reply! Cheers.
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qua., 27 de nov. de 2019 às 10:47, Marco Gaiarin 
escreveu:

> Mandi! Gilberto Nunes
>   In chel di` si favelave...
>
> > I just installed Proxmox 6 in an HPE server, which has HP Smart Array
> P420i
> > and, unfortunately, this not so Smart Array doesn't give me the options
> to
> > make non-raid or HBA/IT Mode...
> > The Proxmox installer ran smoothly, but when try to boot, get this error
>
> I use that controller, but not with ZFS.
>
> Anyway seems that IS possible tu put the controller in HBA mode, see:
>
> https://www.youtube.com/watch?v=JuaezJd4C3I
>
> Probably you have the same effect:
>
> a) upgrading to the latest bios
>
> b) using hpssacli from a temp-installed linux distro, eg a USB key or
>  an usb disk.
>
> --
> dott. Marco Gaiarin GNUPG Key ID:
> 240A3D66
>   Associazione ``La Nostra Famiglia''
> http://www.lanostrafamiglia.it/
>   Polo FVG   -   Via della Bontà, 7 - 33078   -   San Vito al Tagliamento
> (PN)
>   marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f
> +39-0434-842797
>
> Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
>   http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
> (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] ZFS rpool grub rescue boot...

2019-11-27 Thread Gilberto Nunes
Hi there

I just installed Proxmox 6 on an HPE server, which has an HP Smart Array P420i
and, unfortunately, this not-so-Smart Array doesn't give me the option to
use non-RAID or HBA/IT mode...
The Proxmox installer ran smoothly, but when I try to boot, I get this error:

unknow device

Somebody can help with this issue??

Thanks a lot
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] vm no booting

2019-11-08 Thread Gilberto Nunes
any logs?
dmesg
syslog
???
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em sex., 8 de nov. de 2019 às 11:35, Karel Gonzalez Herrera via pve-user <
pve-user@pve.proxmox.com> escreveu:

>
>
>
> -- Forwarded message --
> From: Karel Gonzalez Herrera 
> To: pve-user@pve.proxmox.com
> Cc:
> Bcc:
> Date: Fri, 8 Nov 2019 09:34:43 -0500
> Subject: vm no booting
> after a backup of one vm in proxmox 6.0 the vm stops booting any ideas
> sld
>
>
>
>
>
> Ing. Karel González Herrera
> Administrador de Red
> Etecsa: Dirección Territorial Norte
> e-mail: karel.gonza...@etecsa.cu
> Tel: 8344973 8607483
> Mov: 52182690
>
>
>
>
>
> -- Forwarded message --
> From: Karel Gonzalez Herrera via pve-user 
> To: pve-user@pve.proxmox.com
> Cc: Karel Gonzalez Herrera 
> Bcc:
> Date: Fri, 8 Nov 2019 09:34:43 -0500
> Subject: [PVE-User] vm no booting
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Add SSD journal to OSD's

2019-10-30 Thread Gilberto Nunes
Hi there

I have a cluster with 5 PVE Ceph servers.
One of these servers lost its entire journal SSD device and I need to add a
new one.
I do not have the old SSD anymore.
So, below are the steps I intend to follow... I just need you guys to let
me know whether these steps are correct and, if you could, add some advice.

1 - Set noout
2 - Destroy the OSD that already down
3 - Recreate the OSD on the servers that SSD failed

These steps will make Ceph rebalance the data, right?
Can I lose data?
My cluster right now has some inconsistency:
ceph health detail
HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent
OSD_SCRUB_ERRORS 1 scrub errors
PG_DAMAGED Possible data damage: 1 pg inconsistent
pg 21.181 is active+clean+inconsistent, acting [18,2,6]

Is there any problem doing the steps above with the cluster in this state?
This PG is mapped to other OSDs... OSD.18, which lives on another server...
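
To make the three steps concrete, this is the command sequence I have in mind
(Luminous on PVE 5.x; the OSD id and device names are placeholders, and the
exact pveceph subcommand/option names should be double-checked with "pveceph help"):

ceph osd set noout                                    # step 1: stop rebalancing while working
pveceph destroyosd <osd-id>                           # step 2: for each OSD that used the dead SSD
pveceph createosd /dev/sd<X> -journal_dev /dev/sd<Y>  # step 3: recreate it with the new SSD as journal
ceph osd unset noout                                  # once the new OSDs are back in
ceph pg repair 21.181                                 # separately, ask Ceph to repair the inconsistent PG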

Thanks a lot for any help


---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Strange behavior vzdump

2019-10-23 Thread Gilberto Nunes
I think my problem appeared after installing

https://github.com/ayufan/pve-patches

I removed it but still get this problem...

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em ter, 22 de out de 2019 às 10:44, Marco Gaiarin  escreveu:

> Mandi! Gilberto Nunes
>   In chel di` si favelave...
>
> > I have notice that vzdump options, maxfiles doesn't work properly.
> > I set --maxfiles to 10, but still it's hold old files...
> > For now, I add --remove 1, to the /etc/vzdump.conf, but, according to the
> > vzdump man page, default --remove set is 1, i.e., enable!
> > Why vzdump do not remove old backup, just when set maxfiles??
> > Or even worst, if --remove 1 is the default options, why vzdump doesn't
> > work??
> > Proxmox VE version 5.4
>
> This make some noise on my ear... two clusters, one with
> ''traditional'' iSCSI SAN storage, one with Ceph.
>
> On Ceph one:
>
>  root@hulk:~# ls /srv/pve/dump/ | grep \.lzo | cut -d '-' -f 1-3 | sort |
> uniq -c
>   1 vzdump-lxc-103
>   1 vzdump-lxc-105
>   1 vzdump-lxc-106
>   1 vzdump-lxc-109
>   3 vzdump-lxc-111
>  50 vzdump-lxc-114
>  49 vzdump-lxc-117
>   1 vzdump-qemu-104
>   3 vzdump-qemu-108
>   3 vzdump-qemu-113
>   1 vzdump-qemu-115
>  49 vzdump-qemu-116
>
> My backup stategy is:
>
>  + for some VM/LXC, daily backup (114, 116, 117 are 'daily') all day's week
>apart saturday.
>
>  + for all VM/LXC, on saturday bacula pre-script that run the backup, and
> then
>bacula put on tape.
>
> the bacula 'pre' script do:
>
> /usr/bin/vzdump 117 -storage Backup -maxfiles 1 -remove -compress
> lzo -mode suspend -quiet -mailto c...@sv.lnf.it -mailnotification failure
>
> for every LXC/VM, and as you can see, delete old backup only for some
> VM/LXC, not all.
>
> 'backup' storage is defined as:
>
>  nfs: Backup
> export /srv/pve
> path /mnt/pve/Backup
> server 10.27.251.11
> content vztmpl,images,iso,backup,rootdir
> maxfiles 0
> options vers=3,soft,intr
>
> Clearly, no error in logs.
>
> --
> dott. Marco Gaiarin GNUPG Key ID:
> 240A3D66
>   Associazione ``La Nostra Famiglia''
> http://www.lanostrafamiglia.it/
>   Polo FVG   -   Via della Bontà, 7 - 33078   -   San Vito al Tagliamento
> (PN)
>   marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f
> +39-0434-842797
>
> Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
>   http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
> (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VMID clarifying

2019-10-22 Thread Gilberto Nunes
or better, start with VMID 1, then +n

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em ter, 22 de out de 2019 às 11:05, lists  escreveu:

> Hi,
>
> Actually, we feel the same as Gilberto.
>
> Could proxmox not for example default to something like: highest
> currently-in-use-number PLUS 1?
>
> MJ
>
> On 22-10-2019 15:28, Fabian Grünbichler wrote:
> > On October 22, 2019 2:43 pm, Gilberto Nunes wrote:
> >> Folks,
> >> When you create a VM, it generates an ID, for example 100, 101, 102 ...
> etc
> >
> > no. when you create a VM in the GUI, it suggests the first free slot in
> > the guest ID range. you can choose whatever you want ;)
> >
> >> ...
> >> By removing this VM 101 let's say, and then creating a new one, I
> noticed
> >> that it generates this new one with ID 101 again.
> >
> > only if you don't set another ID.
> >
> >> But I also realized that it takes the backups I had from the old VM 101
> and
> >> links to this new one, even though it's OSs and everything else. My
> fear is
> >> that running a backup routine will overwrite the images I had with that
> ID.
> >> It would probably happen if I had not realized.
> >
> > the solution is to either not delete VMs that are still important (even
> > if 'important' just means having the associated backups semi-protected)
> >
> >> Any way to use sequential ID and not go back in IDs? I do not know if i
> was
> >> clear
> >
> > or don't use the default VMID suggestion by the GUI. there is no way to
> > change that behaviour, since we don't have a record of "IDs that (might)
> > have been used at some point in the past"
> >
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> >
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Strange behavior vzdump

2019-10-22 Thread Gilberto Nunes
Hi there

I have noticed that the vzdump option maxfiles doesn't work properly.
I set --maxfiles to 10, but it still keeps old files...
For now, I have added --remove 1 to /etc/vzdump.conf but, according to the
vzdump man page, the default for --remove is 1, i.e., enabled!
Why does vzdump not remove old backups when only maxfiles is set?
Or even worse, if --remove 1 is the default option, why doesn't vzdump
work?
Proxmox VE version 5.4
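
To make it reproducible, this is how I am testing it by hand, plus where I look
for a conflicting retention value (the VMID and storage name are just examples):

vzdump 110 --storage backup --maxfiles 10 --remove 1 --compress lzo --mode snapshot
grep -n maxfiles /etc/vzdump.conf /etc/pve/storage.cfg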

Thanks


---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] VMID clarifying

2019-10-22 Thread Gilberto Nunes
Folks,
When you create a VM, it generates an ID, for example 100, 101, 102 ... etc
...
By removing this VM 101 let's say, and then creating a new one, I noticed
that it generates this new one with ID 101 again.

But I also realized that it takes the backups I had from the old VM 101 and
links to this new one, even though it's OSs and everything else. My fear is
that running a backup routine will overwrite the images I had with that ID.
It would probably happen if I had not realized.

Any way to use sequential ID and not go back in IDs? I do not know if i was
clear

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Face problem with cluster: CS_ERR_BAD_HANDLE

2019-10-03 Thread Gilberto Nunes
Hi there

I have a 2-node cluster that was working fine!
Then, when I added the second node, I got this error:

CS_ERR_BAD_HANDLE (or something similar!)

when trying to use pvecm status.

I tried to remove the node and delete the cluster, but nothing worked!
My solution was to reinstall everything again.
But is there something more I could have done in order to recover the cluster?
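
For the archives, this is what I would try next time before reinstalling (only a
sketch; pve02 is a placeholder for the broken node, and it assumes a single
usable node is left):

systemctl restart corosync pve-cluster   # sometimes enough to clear a stale handle error
pvecm status
pvecm expected 1                         # let the surviving node reach quorum on its own
pvecm delnode pve02                      # then drop the broken node from the cluster config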

Thanks

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Kernel 5.3 and Proxmox Ceph nodes

2019-09-16 Thread Gilberto Nunes
Oh! I sorry! I didn't  sent the link which I referred to

https://www.phoronix.com/scan.php?page=news_item&px=Ceph-Linux-5.3-Changes

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em seg, 16 de set de 2019 às 05:50, Ronny Aasen 
escreveu:

> On 16.09.2019 03:17, Gilberto Nunes wrote:
> > Hi there
> >
> > I read this about kernel 5.3 and ceph, and I am curious...
> > I have a 6 nodes proxmox ceph cluster with luminous...
> > Should be a good idea to user kernel 5.3 from here:
> >
> > https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.3/
> > ---
> > Gilberto Nunes Ferreira
> >
> > (47) 3025-5907
> > (47) 99676-7530 - Whatsapp / Telegram
> >
> > Skype: gilberto.nunes36
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> >
>
> you read "this" ?
> This what exactly?
>
>
> generally unless you have a problem you need fixed i would run the
> kernels from proxmox.
>
> Ronny
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Kernel 5.3 and Proxmox Ceph nodes

2019-09-15 Thread Gilberto Nunes
Hi there

I read this about kernel 5.3 and Ceph, and I am curious...
I have a 6-node Proxmox Ceph cluster with Luminous...
Would it be a good idea to use kernel 5.3 from here:

https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.3/
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Connect PVE Box to 2 iscsi server...

2019-08-22 Thread Gilberto Nunes
Hi Ronny

I'll try it then report if here!

Thanks
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36




Em sex, 16 de ago de 2019 às 17:53, Ronny Aasen
 escreveu:
>
> On 16.08.2019 18:06, Gilberto Nunes wrote:
> > Hi there
> >
> > Here I have two iscsi servers, that work together in order to provide
> > a single connection for a certain initiator.
> > Well at least to a Windows box when using MS iSCSI initiator.
> > In such OS, I was able to connect using both iSCSI servers, and in
> > Windows storage manager, I see just one HDD.
> > So when I shutdown iscsi serverA, the HDD remain up and running, in Windows 
> > BOX.
> > However, when I try to do that in PVE Box, I was enable to active
> > multipath and I see both HDD from iSCSI servers, /dev/sdb and
> > /dev/sdc.
> > But I could not figure out how to make PVE see both HDD in storace.cfg
> > in a single one storage, just like Windows box do!
> > What do I missing??
>
> Hello
>
> I assume you need to use multipathd. I have something similar using
> Fiberchannel disks, but the method should be similar.
>
> if it is not installed, you need to install  multipath-tools using apt.
> check that multipathd is running.
>
> run "multipath -v2" it should scan and create the multipath device.
>
> with multipath -ll and dmsetup ls --tree  ; you should now see the multiple 
> disks come together as a single device.
>
> # example
> # multipath -ll
> 360example0292e42000b dm-2 FUJITSU,ETERNUS_DXL
> size=5.0T features='2 queue_if_no_path retain_attached_hw_handler' 
> hwhandler='0' wp=rw
> |-+- policy='service-time 0' prio=50 status=active
> | |- 0:0:1:0 sdc 8:32 active ready running
> | `- 2:0:1:0 sde 8:64 active ready running
> `-+- policy='service-time 0' prio=10 status=enabled
>|- 0:0:0:0 sdb 8:16 active ready running
>`- 2:0:0:0 sdd 8:48 active ready running
>
> verify you see the device under
> /dev/mapper/360example0292e42000b
>
>
> use the device as you see fit. We use it as a shared lvm VG between multiple 
> nodes for HA and failover.
> But you can also format as your filesystem of choise and mount it as a 
> directory, or run a cluster filesystem like ocfs or gfs and mount it as a 
> shared directory.
>
> good luck
> Ronny
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Connect PVE Box to 2 iscsi server...

2019-08-16 Thread Gilberto Nunes
Hi there

Here I have two iSCSI servers that work together in order to provide
a single connection for a certain initiator.
Well, at least to a Windows box when using the MS iSCSI initiator.
In that OS, I was able to connect using both iSCSI servers, and in the
Windows storage manager I see just one HDD.
So when I shut down iSCSI serverA, the HDD remains up and running in the Windows box.
However, when I try to do that on the PVE box, I was able to activate
multipath and I see both HDDs from the iSCSI servers, /dev/sdb and
/dev/sdc.
But I could not figure out how to make PVE see both HDDs as a single
storage in storage.cfg, just like the Windows box does!
What am I missing??


---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] PVE-6 and iscsi server

2019-08-16 Thread Gilberto Nunes
More trouble: I attempted to create a VG:

pve01:~# vgcreate iscsi /dev/mapper/mpathb
  Volume group "iscsi" successfully created
pve01:~# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  pve   1   3   0 wz--n- 19.50g 2.38g
pve01:~# pvs
  PV VGFmt  Attr PSize  PFree
  /dev/mapper/mpathb iscsi lvm2 a--   5.00g 5.00g
  /dev/sda3  pve   lvm2 a--  19.50g 2.38g
pve01:~# pvremove /dev/mapper/mpathb
  No PV found on device /dev/mapper/mpathb.
pve01:~# ls /dev/mapper/
control  mpathb  pve-data  pve-data_tdata  pve-data_tmeta  pve-root  pve-swap
pve01:~# /etc/init.d/multipath-tools restart
[ ok ] Restarting multipath-tools (via systemctl): multipath-tools.service.
pve01:~# pvs
  PV VGFmt  Attr PSize  PFree
  /dev/mapper/mpathb iscsi lvm2 a--   5.00g 5.00g
  /dev/sda3  pve   lvm2 a--  19.50g 2.38g
pve01:~# lvs
  LV   VG  Attr   LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve twi-a-tz-- 8.00g 0.00   1.58
  root pve -wi-ao 4.75g
  swap pve -wi-ao 2.38g
pve01:~# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  pve   1   3   0 wz--n- 19.50g 2.38g
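
A rough sketch of what I plan to check next (the VG name is the one from above):

pvscan --cache      # refresh LVM's view of the devices
vgscan -v           # rescan for volume groups
vgchange -ay iscsi  # try to activate the VG again
grep -E 'filter' /etc/lvm/lvm.conf   # make sure the mpath device is not filtered out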


---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36


Em sex, 16 de ago de 2019 às 08:56, Gilberto Nunes
 escreveu:
>
> But still, I am not able to use 2 iscsi servers...
> In windows XP or 7 I could use iSCSI Initiator to discover both
> portals... And I could see one disck came from both iscsi server!
> Proxmox apparently do not support this in Storage WEB GUI!
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
> Em sex, 16 de ago de 2019 às 08:40, Gilberto Nunes
>  escreveu:
> >
> > Hi there! It's me again!
> >
> > I do all same steps in Proxmox VE 5.4 and everything works as expected!
> > Is this some bug with Proxmox VE 6?
> >
> >
> >
> > ---
> > Gilberto Nunes Ferreira
> >
> > (47) 3025-5907
> > (47) 99676-7530 - Whatsapp / Telegram
> >
> > Skype: gilberto.nunes36
> >
> >
> >
> >
> > Em qui, 15 de ago de 2019 às 15:33, Gilberto Nunes
> >  escreveu:
> > >
> > > More info
> > >
> > > ve01:~# iscsiadm -m discovery -t st -p 10.10.10.100
> > > 10.10.10.100:3260,1 iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
> > > pve01:~# iscsiadm -m discovery -t st -p 10.10.10.110
> > > 10.10.10.110:3260,1 iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
> > > pve01:~# iscsiadm -m node --login
> > > iscsiadm: default: 1 session requested, but 1 already present.
> > > iscsiadm: default: 1 session requested, but 1 already present.
> > > iscsiadm: Could not log into all portals
> > > pve01:~# iscsiadm -m node --logout
> > > Logging out of session [sid: 5, target:
> > > iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> > > 10.10.10.100,3260]
> > > Logging out of session [sid: 6, target:
> > > iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> > > 10.10.10.110,3260]
> > > Logout of [sid: 5, target:
> > > iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> > > 10.10.10.100,3260] successful.
> > > Logout of [sid: 6, target:
> > > iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> > > 10.10.10.110,3260] successful.
> > > pve01:~# iscsiadm -m node --login
> > > Logging in to [iface: default, target:
> > > iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> > > 10.10.10.100,3260] (multiple)
> > > Logging in to [iface: default, target:
> > > iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> > > 10.10.10.110,3260] (multiple)
> > > Login to [iface: default, target:
> > > iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> > > 10.10.10.100,3260] successful.
> > > Login to [iface: default, target:
> > > iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> > > 10.10.10.110,3260] successful.
> > > pve01:~# iscsiadm -m node
> > > 10.10.10.100:3260,1 iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
> > > 10.10.10.110:3260,1 iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
> > > pve01:~# iscsiadm -m session -P 1
> > > Target: iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br (non-flash)
> > > Current Portal: 10.10.10.100:3260,1
> > > Persistent Portal: 10.10.10.100:3260,1
> > > **
> > > Interface:
> > > **
> > > Iface Name: default
> > > Iface Transport: tcp

Re: [PVE-User] PVE-6 and iscsi server

2019-08-16 Thread Gilberto Nunes
But still, I am not able to use 2 iSCSI servers...
In Windows XP or 7 I could use the iSCSI Initiator to discover both
portals... and I could see one disk coming from both iSCSI servers!
Proxmox apparently does not support this in the storage web GUI!
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36



Em sex, 16 de ago de 2019 às 08:40, Gilberto Nunes
 escreveu:
>
> Hi there! It's me again!
>
> I do all same steps in Proxmox VE 5.4 and everything works as expected!
> Is this some bug with Proxmox VE 6?
>
>
>
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
> Em qui, 15 de ago de 2019 às 15:33, Gilberto Nunes
>  escreveu:
> >
> > More info
> >
> > ve01:~# iscsiadm -m discovery -t st -p 10.10.10.100
> > 10.10.10.100:3260,1 iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
> > pve01:~# iscsiadm -m discovery -t st -p 10.10.10.110
> > 10.10.10.110:3260,1 iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
> > pve01:~# iscsiadm -m node --login
> > iscsiadm: default: 1 session requested, but 1 already present.
> > iscsiadm: default: 1 session requested, but 1 already present.
> > iscsiadm: Could not log into all portals
> > pve01:~# iscsiadm -m node --logout
> > Logging out of session [sid: 5, target:
> > iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> > 10.10.10.100,3260]
> > Logging out of session [sid: 6, target:
> > iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> > 10.10.10.110,3260]
> > Logout of [sid: 5, target:
> > iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> > 10.10.10.100,3260] successful.
> > Logout of [sid: 6, target:
> > iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> > 10.10.10.110,3260] successful.
> > pve01:~# iscsiadm -m node --login
> > Logging in to [iface: default, target:
> > iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> > 10.10.10.100,3260] (multiple)
> > Logging in to [iface: default, target:
> > iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> > 10.10.10.110,3260] (multiple)
> > Login to [iface: default, target:
> > iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> > 10.10.10.100,3260] successful.
> > Login to [iface: default, target:
> > iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> > 10.10.10.110,3260] successful.
> > pve01:~# iscsiadm -m node
> > 10.10.10.100:3260,1 iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
> > 10.10.10.110:3260,1 iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
> > pve01:~# iscsiadm -m session -P 1
> > Target: iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br (non-flash)
> > Current Portal: 10.10.10.100:3260,1
> > Persistent Portal: 10.10.10.100:3260,1
> > **
> > Interface:
> > **
> > Iface Name: default
> > Iface Transport: tcp
> > Iface Initiatorname: iqn.1993-08.org.debian:01:3af61619768
> > Iface IPaddress: 10.10.10.200
> > Iface HWaddress: 
> > Iface Netdev: 
> > SID: 7
> > iSCSI Connection State: LOGGED IN
> > iSCSI Session State: LOGGED_IN
> > Internal iscsid Session State: NO CHANGE
> > Current Portal: 10.10.10.110:3260,1
> > Persistent Portal: 10.10.10.110:3260,1
> > **
> > Interface:
> > **
> > Iface Name: default
> > Iface Transport: tcp
> > Iface Initiatorname: iqn.1993-08.org.debian:01:3af61619768
> > Iface IPaddress: 10.10.10.200
> > Iface HWaddress: 
> > Iface Netdev: 
> > SID: 8
> > iSCSI Connection State: LOGGED IN
> > iSCSI Session State: LOGGED_IN
> > Internal iscsid Session State: NO CHANGE
> >
> > ---
> > Gilberto Nunes Ferreira
> >
> > (47) 3025-5907
> > (47) 99676-7530 - Whatsapp / Telegram
> >
> > Skype: gilberto.nunes36
> >
> >
> >
> > Em qui, 15 de ago de 2019 às 15:28, Gilberto Nunes
> >  escreveu:
> > >
> > > Hi there...
> > >
> > > I have two iSCSI server, which works with zfs replication and tgt...
> > > (OviOS Linux, to be precise).
> > >
> > > In Windows BOX, using the MS iSCSI initiator, I am able to set 2
> > > different IP addresses in order to get a redundancy, but how can I
> > > achieve that in Proxmox?
> > > In /etc/pve/storage.cfg I cannot use two different IP addresses.
> > > So, I make my way trying to use multipath.
> > > Both disks from both iSCSI servers 

Re: [PVE-User] PVE-6 and iscsi server

2019-08-16 Thread Gilberto Nunes
Hi there! It's me again!

I did all the same steps in Proxmox VE 5.4 and everything worked as expected!
Is this some bug in Proxmox VE 6?



---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36




Em qui, 15 de ago de 2019 às 15:33, Gilberto Nunes
 escreveu:
>
> More info
>
> ve01:~# iscsiadm -m discovery -t st -p 10.10.10.100
> 10.10.10.100:3260,1 iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
> pve01:~# iscsiadm -m discovery -t st -p 10.10.10.110
> 10.10.10.110:3260,1 iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
> pve01:~# iscsiadm -m node --login
> iscsiadm: default: 1 session requested, but 1 already present.
> iscsiadm: default: 1 session requested, but 1 already present.
> iscsiadm: Could not log into all portals
> pve01:~# iscsiadm -m node --logout
> Logging out of session [sid: 5, target:
> iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> 10.10.10.100,3260]
> Logging out of session [sid: 6, target:
> iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> 10.10.10.110,3260]
> Logout of [sid: 5, target:
> iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> 10.10.10.100,3260] successful.
> Logout of [sid: 6, target:
> iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> 10.10.10.110,3260] successful.
> pve01:~# iscsiadm -m node --login
> Logging in to [iface: default, target:
> iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> 10.10.10.100,3260] (multiple)
> Logging in to [iface: default, target:
> iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> 10.10.10.110,3260] (multiple)
> Login to [iface: default, target:
> iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> 10.10.10.100,3260] successful.
> Login to [iface: default, target:
> iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> 10.10.10.110,3260] successful.
> pve01:~# iscsiadm -m node
> 10.10.10.100:3260,1 iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
> 10.10.10.110:3260,1 iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
> pve01:~# iscsiadm -m session -P 1
> Target: iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br (non-flash)
> Current Portal: 10.10.10.100:3260,1
> Persistent Portal: 10.10.10.100:3260,1
> **
> Interface:
> **
> Iface Name: default
> Iface Transport: tcp
> Iface Initiatorname: iqn.1993-08.org.debian:01:3af61619768
> Iface IPaddress: 10.10.10.200
> Iface HWaddress: 
> Iface Netdev: 
> SID: 7
> iSCSI Connection State: LOGGED IN
> iSCSI Session State: LOGGED_IN
> Internal iscsid Session State: NO CHANGE
> Current Portal: 10.10.10.110:3260,1
> Persistent Portal: 10.10.10.110:3260,1
> **
> Interface:
> **
> Iface Name: default
> Iface Transport: tcp
> Iface Initiatorname: iqn.1993-08.org.debian:01:3af61619768
> Iface IPaddress: 10.10.10.200
> Iface HWaddress: 
> Iface Netdev: 
> SID: 8
> iSCSI Connection State: LOGGED IN
> iSCSI Session State: LOGGED_IN
> Internal iscsid Session State: NO CHANGE
>
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
> Em qui, 15 de ago de 2019 às 15:28, Gilberto Nunes
>  escreveu:
> >
> > Hi there...
> >
> > I have two iSCSI server, which works with zfs replication and tgt...
> > (OviOS Linux, to be precise).
> >
> > In Windows BOX, using the MS iSCSI initiator, I am able to set 2
> > different IP addresses in order to get a redundancy, but how can I
> > achieve that in Proxmox?
> > In /etc/pve/storage.cfg I cannot use two different IP addresses.
> > So, I make my way trying to use multipath.
> > Both disks from both iSCSI servers appears in PVE box, as /dev/sdc and 
> > /dev/sdd.
> > Here the multipath.conf file:
> >
> > defaults {
> > user_friendly_namesyes
> > polling_interval2
> > path_selector   "round-robin 0"
> > path_grouping_policymultibus
> > path_checkerreadsector0
> > rr_min_io   100
> > failbackimmediate
> > no_path_retry   queue
> > }
> > blacklist {
> > wwid .*
> > }
> > blacklist_exceptions {
> > wwid "360010001"
> > property "(ID_SCSI_VPD|ID_WWN|ID_SERIAL)"
> > }
> > multipaths {
> >   multipath {
> > wwid "360010001"
> > alias mylun
> >   }
> > }
> >
> > The wwid I get from:
> > /lib/u

Re: [PVE-User] PVE-6 and iscsi server

2019-08-15 Thread Gilberto Nunes
More info

ve01:~# iscsiadm -m discovery -t st -p 10.10.10.100
10.10.10.100:3260,1 iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
pve01:~# iscsiadm -m discovery -t st -p 10.10.10.110
10.10.10.110:3260,1 iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
pve01:~# iscsiadm -m node --login
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: Could not log into all portals
pve01:~# iscsiadm -m node --logout
Logging out of session [sid: 5, target:
iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
10.10.10.100,3260]
Logging out of session [sid: 6, target:
iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
10.10.10.110,3260]
Logout of [sid: 5, target:
iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
10.10.10.100,3260] successful.
Logout of [sid: 6, target:
iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
10.10.10.110,3260] successful.
pve01:~# iscsiadm -m node --login
Logging in to [iface: default, target:
iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
10.10.10.100,3260] (multiple)
Logging in to [iface: default, target:
iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
10.10.10.110,3260] (multiple)
Login to [iface: default, target:
iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
10.10.10.100,3260] successful.
Login to [iface: default, target:
iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
10.10.10.110,3260] successful.
pve01:~# iscsiadm -m node
10.10.10.100:3260,1 iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
10.10.10.110:3260,1 iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
pve01:~# iscsiadm -m session -P 1
Target: iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br (non-flash)
Current Portal: 10.10.10.100:3260,1
Persistent Portal: 10.10.10.100:3260,1
**
Interface:
**
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:3af61619768
Iface IPaddress: 10.10.10.200
Iface HWaddress: 
Iface Netdev: 
SID: 7
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Current Portal: 10.10.10.110:3260,1
Persistent Portal: 10.10.10.110:3260,1
**
Interface:
**
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:3af61619768
Iface IPaddress: 10.10.10.200
Iface HWaddress: 
Iface Netdev: 
SID: 8
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36



Em qui, 15 de ago de 2019 às 15:28, Gilberto Nunes
 escreveu:
>
> Hi there...
>
> I have two iSCSI server, which works with zfs replication and tgt...
> (OviOS Linux, to be precise).
>
> In Windows BOX, using the MS iSCSI initiator, I am able to set 2
> different IP addresses in order to get a redundancy, but how can I
> achieve that in Proxmox?
> In /etc/pve/storage.cfg I cannot use two different IP addresses.
> So, I make my way trying to use multipath.
> Both disks from both iSCSI servers appears in PVE box, as /dev/sdc and 
> /dev/sdd.
> Here the multipath.conf file:
>
> defaults {
> user_friendly_namesyes
> polling_interval2
> path_selector   "round-robin 0"
> path_grouping_policymultibus
> path_checkerreadsector0
> rr_min_io   100
> failbackimmediate
> no_path_retry   queue
> }
> blacklist {
> wwid .*
> }
> blacklist_exceptions {
> wwid "360010001"
> property "(ID_SCSI_VPD|ID_WWN|ID_SERIAL)"
> }
> multipaths {
>   multipath {
> wwid "360010001"
> alias mylun
>   }
> }
>
> The wwid I get from:
> /lib/udev/scsi_id -g -u -d /dev/sdc
> /lib/udev/scsi_id -g -u -d /dev/sdd
>
> Command multipath -v3 show this:
> Aug 15 15:25:30 | set open fds limit to 1048576/1048576
> Aug 15 15:25:30 | loading //lib/multipath/libchecktur.so checker
> Aug 15 15:25:30 | checker tur: message table size = 3
> Aug 15 15:25:30 | loading //lib/multipath/libprioconst.so prioritizer
> Aug 15 15:25:30 | foreign library "nvme" loaded successfully
> Aug 15 15:25:30 | sda: udev property ID_SERIAL whitelisted
> Aug 15 15:25:30 | sda: mask = 0x1f
> Aug 15 15:25:30 | sda: dev_t = 8:0
> Aug 15 15:25:30 | sda: size = 41943040
> Aug 15 15:25:30 | sda: vendor = QEMU
> Aug 15 15:25:30 | sda: product = QEMU HARDDISK
> Aug 15 15:25:30 | sda: rev = 2.5+
> Aug 15 15:25:30 | sda: h:b:t:l = 0:0:1:0
> Aug 15 15:25:30 | sda: tgt_node_name =
> Aug 15 15:25:30 | sda: path state = running
> Aug 15 15:25:30 | 

[PVE-User] PVE-6 and iscsi server

2019-08-15 Thread Gilberto Nunes
STOR
size=64G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
  |- 5:0:0:0 sdb 8:16 active ready running
  `- 6:0:0:0 sdc 8:32 active ready running

But instead, I get nothing at all!

Where am I going wrong here??
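
One thing I am double-checking is that the wwid in my multipath.conf really
matches what the disks report; a quick sketch:

/lib/udev/scsi_id -g -u -d /dev/sdb
/lib/udev/scsi_id -g -u -d /dev/sdc
multipath -v3 2>&1 | grep -Ei 'wwid|blacklist'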

Thanks a lot


---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Reinstall Proxmox with Ceph storage

2019-08-06 Thread Gilberto Nunes
Sure! I will not!
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36




Em ter, 6 de ago de 2019 às 10:02, Alwin Antreich via pve-user
 escreveu:
>
>
>
>
> -- Forwarded message --
> From: Alwin Antreich 
> To: PVE User List 
> Cc:
> Bcc:
> Date: Tue, 06 Aug 2019 15:02:00 +0200
> Subject: Re: [PVE-User] Reinstall Proxmox with Ceph storage
> On August 6, 2019 2:46:21 PM GMT+02:00, Gilberto Nunes 
>  wrote:
> >WOW! This is it??? Geez! So simple Thanks a lot
> >---
> >Gilberto Nunes Ferreira
> >
> >(47) 3025-5907
> >(47) 99676-7530 - Whatsapp / Telegram
> >
> >Skype: gilberto.nunes36
> >
> >
> >
> >
> >Em ter, 6 de ago de 2019 às 06:48, Alwin Antreich
> > escreveu:
> >>
> >> Hello Gilberto,
> >>
> >> On Mon, Aug 05, 2019 at 04:21:03PM -0300, Gilberto Nunes wrote:
> >> > Hi there...
> >> >
> >> > Today we have 3 servers work on Cluster HA and Ceph.
> >> > Proxmox all nodes is 5.4
> >> > We have a mix of 3 SAS and 3 SATA, but just 2 SAS are using in CEPH
> >storage.
> >> > So, we like to reinstall each node in an HDD SSD 120GB in order to
> >you
> >> > the third SAS into SAS CEPH POOL.
> >> > We get 2 POOL's:
> >> > SAS - which content 2 HDD SAS
> >> > SATA - which content 3 HDD SATA
> >> >
> >> > In general we need move the disk image in SAS POOL to SATA POOL?
> >> > Or there any other advice in how to proceed in this case??
> >> As you have 3x nodes, you can simply do it one node at a time.
> >Assuming
> >> you are using a size 3 / min_size 2 for your Ceph pools. No need to
> >move
> >> any image.
> >>
> >> Ceph OSDs are portable, meaning, if you configure the newly installed
> >> node to be connected (and configured) to the same Ceph cluster, the
> >OSDs
> >> should just pop-in again.
> >>
> >> First deactivate HA on all nodes. Then you could try to clone the OS
> >> disk to the SSD (eg. clonezilla, dd). Or remove the node from the
> >> cluster (not from Ceph) and re-install it from scratch. Later on, the
> >> old SAS disk can be reused as additional OSD.
> >>
> >> --
> >> Cheers,
> >> Alwin
> >>
> >> ___
> >> pve-user mailing list
> >> pve-user@pve.proxmox.com
> >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> >___
> >pve-user mailing list
> >pve-user@pve.proxmox.com
> >https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
> But don't forget to let the Ceph cluster heal first, before you start the 
> next. ;)
>
>
>
> -- Forwarded message --
> From: Alwin Antreich via pve-user 
> To: PVE User List 
> Cc: Alwin Antreich 
> Bcc:
> Date: Tue, 06 Aug 2019 15:02:00 +0200
> Subject: Re: [PVE-User] Reinstall Proxmox with Ceph storage
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Reinstall Proxmox with Ceph storage

2019-08-06 Thread Gilberto Nunes
WOW! Is that all it takes??? Geez! So simple! Thanks a lot
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36




Em ter, 6 de ago de 2019 às 06:48, Alwin Antreich
 escreveu:
>
> Hello Gilberto,
>
> On Mon, Aug 05, 2019 at 04:21:03PM -0300, Gilberto Nunes wrote:
> > Hi there...
> >
> > Today we have 3 servers work on Cluster HA and Ceph.
> > Proxmox all nodes is 5.4
> > We have a mix of 3 SAS and 3 SATA, but just 2 SAS are using in CEPH storage.
> > So, we like to reinstall each node in an HDD SSD 120GB in order to you
> > the third SAS into SAS CEPH POOL.
> > We get 2 POOL's:
> > SAS - which content 2 HDD SAS
> > SATA - which content 3 HDD SATA
> >
> > In general we need move the disk image in SAS POOL to SATA POOL?
> > Or there any other advice in how to proceed in this case??
> As you have 3x nodes, you can simply do it one node at a time. Assuming
> you are using a size 3 / min_size 2 for your Ceph pools. No need to move
> any image.
>
> Ceph OSDs are portable, meaning, if you configure the newly installed
> node to be connected (and configured) to the same Ceph cluster, the OSDs
> should just pop-in again.
>
> First deactivate HA on all nodes. Then you could try to clone the OS
> disk to the SSD (eg. clonezilla, dd). Or remove the node from the
> cluster (not from Ceph) and re-install it from scratch. Later on, the
> old SAS disk can be reused as additional OSD.
>
> --
> Cheers,
> Alwin
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Reinstall Proxmox with Ceph storage

2019-08-05 Thread Gilberto Nunes
Hi there...

Today we have 3 servers working in an HA cluster with Ceph.
All Proxmox nodes are on 5.4.
We have a mix of 3 SAS and 3 SATA disks, but just 2 SAS disks are used in the Ceph storage.
So, we would like to reinstall each node on a 120GB SSD in order to use
the third SAS disk in the SAS Ceph pool.
We have 2 pools:
SAS - which contains 2 SAS HDDs
SATA - which contains 3 SATA HDDs

In general, do we need to move the disk images in the SAS pool to the SATA pool?
Or is there any other advice on how to proceed in this case?
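
(If moving turns out to be necessary, I guess it would be done per disk with
something like the command below; the VM ID, disk name and target storage name
are just examples:

qm move_disk 100 scsi0 SATA --delete 1
)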

Thanks for any light in this matter!

Cheers


---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Storage Server Recommendations Please (JR Richardson)

2019-08-03 Thread Gilberto Nunes
Hi there...
Well, I do not have a lot of servers... I think it is 4 or 6...
Anyway, I would suggest using the hyperconvergence feature with
Proxmox, which works very well...
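
Roughly, on PVE 5.x the hyperconverged setup is just a few commands per node;
the network and the device below are made-up examples:

pveceph install                       # install the Ceph packages
pveceph init --network 10.10.10.0/24  # run once, on the first node only
pveceph createmon                     # one monitor per node (3 are enough)
pveceph createosd /dev/sdb            # one OSD per data disk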

Cheers
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36



Em sáb, 3 de ago de 2019 às 10:03, JR Richardson
 escreveu:
>
> Gilberto,
>
> I have 14 hypervisor nodes hitting 6 storage servers, dell 610's, and a
> couple of backup servers. All VMs, 200 Linux servers and about 20 Win
> servers use NFS shared storage. I'd like to consolidate storage down to two
> servers.
>
> Nada suggested Synology RS818RP+ it is turnkey solution, I'll check into it.
>
> Thanks.
>
> JR
>
>
> how much servers do you have?
> ---
> Gilberto Nunes Ferreira
> >
> > Hi All,
> >
> > I'm thinking about upgrading to 10 Gig storage network for my
> > clusters. Current servers are dell 610 with H700 Raid controller, 2
> > drives in Raid1 for OS and 4 drives in Raid10 for NFS shared storage
> > to the hypervisor nodes. Network is 2xGigabit up links to all servers
> > and storage. Load from the hypervisors is are about 200 Linux servers
> > and about 20 Win servers. I'm not seeing any issue currently in normal
> > operations, but I can see disk IO wait states on storage servers
> > periodically when some of the VMs are booting or tasking storage,
> > creating new VM disks. All storage array disks are currently 15K SAS
> > spinners, no SSDs.
> >
> > I'm considering upgrading to SSD caching server with conventional
> > storage disk back end. Main thing is to get network up to 10Gig.
> > Building servers from scratch is OK but I'd also like to hear about
> > some turnkey options. Storage requirement is currently under 10TB.
> >
> > Any suggestions are welcome.
> >
> > Thanks.
> >
> > JR
> > --
> > JR Richardson
> > Engineering for the Masses
> > Chasing the Azeotrope
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Storage Server Recommendations Please

2019-08-02 Thread Gilberto Nunes
Could you describe it better??
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36




Em sex, 2 de ago de 2019 às 20:08, Gilberto Nunes
 escreveu:
>
> how much servers do you have?
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
> Em sex, 2 de ago de 2019 às 17:08, JR Richardson
>  escreveu:
> >
> > Hi All,
> >
> > I'm thinking about upgrading to 10 Gig storage network for my
> > clusters. Current servers are dell 610 with H700 Raid controller, 2
> > drives in Raid1 for OS and 4 drives in Raid10 for NFS shared storage
> > to the hypervisor nodes. Network is 2xGigabit up links to all servers
> > and storage. Load from the hypervisors is are about 200 Linux servers
> > and about 20 Win servers. I'm not seeing any issue currently in normal
> > operations, but I can see disk IO wait states on storage servers
> > periodically when some of the VMs are booting or tasking storage,
> > creating new VM disks. All storage array disks are currently 15K SAS
> > spinners, no SSDs.
> >
> > I'm considering upgrading to SSD caching server with conventional
> > storage disk back end. Main thing is to get network up to 10Gig.
> > Building servers from scratch is OK but I'd also like to hear about
> > some turnkey options. Storage requirement is currently under 10TB.
> >
> > Any suggestions are welcome.
> >
> > Thanks.
> >
> > JR
> > --
> > JR Richardson
> > Engineering for the Masses
> > Chasing the Azeotrope
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Storage Server Recommendations Please

2019-08-02 Thread Gilberto Nunes
How many servers do you have?
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36




Em sex, 2 de ago de 2019 às 17:08, JR Richardson
 escreveu:
>
> Hi All,
>
> I'm thinking about upgrading to 10 Gig storage network for my
> clusters. Current servers are dell 610 with H700 Raid controller, 2
> drives in Raid1 for OS and 4 drives in Raid10 for NFS shared storage
> to the hypervisor nodes. Network is 2xGigabit up links to all servers
> and storage. Load from the hypervisors is are about 200 Linux servers
> and about 20 Win servers. I'm not seeing any issue currently in normal
> operations, but I can see disk IO wait states on storage servers
> periodically when some of the VMs are booting or tasking storage,
> creating new VM disks. All storage array disks are currently 15K SAS
> spinners, no SSDs.
>
> I'm considering upgrading to SSD caching server with conventional
> storage disk back end. Main thing is to get network up to 10Gig.
> Building servers from scratch is OK but I'd also like to hear about
> some turnkey options. Storage requirement is currently under 10TB.
>
> Any suggestions are welcome.
>
> Thanks.
>
> JR
> --
> JR Richardson
> Engineering for the Masses
> Chasing the Azeotrope
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] rebooting vm using curl

2019-08-01 Thread Gilberto Nunes
Instead of curl, why not use pvesh?
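
A rough example, assuming VM 100 on node pve01 (both names are made up):

pvesh create /nodes/pve01/qemu/100/status/reset      # hard reset
pvesh create /nodes/pve01/qemu/100/status/shutdown   # clean shutdown, then
pvesh create /nodes/pve01/qemu/100/status/start      # start it again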
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36




Em qui, 1 de ago de 2019 às 12:50, Diaolin  escreveu:
>
> Il 2019-08-01 16:46 Renato Gallo via pve-user ha scritto:
> >
> > Hello,
> >
> > Knowing the vm id, the admin user and password, the proxmox LAN ip,
> > is it possible to reboot the vm using curl ?
> > which api string should I use ?
> >
> > Renato Gallo
> >
> >
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>
> You find here the trick
>
> https://pve.proxmox.com/wiki/Proxmox_VE_API
>
> Diaolin
> ---
> par sèmpro
>
> Vöime bèn sol qoanche às tèmp
> anca ‘l fuss sol en menùt
> par mi e par ti
> e po’ pù niènt
>
> Giuliano
>
> Tel: 349 66 84 215
> Skype: diaolin
>
>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] converting privileged CT to unprivileged

2019-07-31 Thread Gilberto Nunes
Or do that! Yep
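
Something along these lines, before the shutdown and the backup (the container
ID is just an example):

pct exec 100 -- rm -f /var/spool/postfix/dev/random /var/spool/postfix/dev/urandom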
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36




Em qua, 31 de jul de 2019 às 13:05, Tom Weber  escreveu:
>
> Or just delete these files before shutting down the container and
> making the backup.
>
>   Tom
>
> Am Mittwoch, den 31.07.2019, 12:39 -0300 schrieb Gilberto Nunes:
> > You can uncompress the backup for any other directory, delete urandom
> > and random and then, compress again the whole directory...
> > Then try restore into PVE again.
> >
> > ---
> > Gilberto Nunes Ferreira
> >
> > (47) 3025-5907
> > (47) 99676-7530 - Whatsapp / Telegram
> >
> > Skype: gilberto.nunes36
> >
> >
> >
> >
> > Em qua, 31 de jul de 2019 às 12:37, Adam Weremczuk
> >  escreveu:
> > >
> > > Hi all,
> > >
> > > PVE 5.4.6.
> > >
> > > My container was created as privileged and runs on zfs pool shared
> > > by 2
> > > hosts.
> > >
> > > I've unsuccessfully tried to convert it from GUI:
> > > - stopped the container
> > > - took a backup
> > > - clicked "restore" ("unprivileged" ticked - default)
> > >
> > > extracting archive
> > > '/var/lib/vz/dump/vzdump-lxc-100-2019_07_31-16_15_48.tar.lzo'
> > > tar: ./var/spool/postfix/dev/urandom: Cannot mknod: Operation not
> > > permitted
> > > tar: ./var/spool/postfix/dev/random: Cannot mknod: Operation not
> > > permitted
> > > Total bytes read: 619950080 (592MiB, 42MiB/s)
> > > tar: Exiting with failure status due to previous errors
> > > TASK ERROR: unable to restore CT 100 - command 'lxc-usernsexec -m
> > > u:0:10:65536 -m g:0:10:65536 -- tar xpf - --lzop --totals
> > > --one-file-system -p --sparse --numeric-owner --acls --xattrs
> > > '--xattrs-include=user.*' '--xattrs-include=security.capability'
> > > '--warning=no-file-ignored' '--warning=no-xattr-write' -C
> > > /var/lib/lxc/100/rootfs --skip-old-files --anchored --exclude
> > > './dev/*''
> > > failed: exit code 2
> > >
> > > CT 100 completely disappeared from the list!
> > >
> > > Earlier attempt from shell (105 was the first available ID):
> > >
> > > pct restore 105
> > > /var/lib/vz/dump/vzdump-lxc-100-2019_07_31-16_15_48.tar.lzo
> > > -ignore-unpack-errors 1 -unprivileged
> > > 400 Parameter verification failed.
> > > storage: storage 'local' does not support container directories
> > > pct restore   [OPTIONS]
> > >
> > > Any hints?
> > >
> > > Thanks,
> > > Adam
> > >
> > > ___
> > > pve-user mailing list
> > > pve-user@pve.proxmox.com
> > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> >
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] converting privileged CT to unprivileged

2019-07-31 Thread Gilberto Nunes
You can uncompress the backup to any other directory, delete urandom
and random, and then compress the whole directory again...
Then try restoring it into PVE again.
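
A rough sketch of what I mean; the paths come from your log, and the target
storage name is a made-up example that must support container directories:

mkdir /tmp/ct100
tar --lzop -xpf /var/lib/vz/dump/vzdump-lxc-100-2019_07_31-16_15_48.tar.lzo -C /tmp/ct100
rm -f /tmp/ct100/var/spool/postfix/dev/random /tmp/ct100/var/spool/postfix/dev/urandom
tar --lzop --numeric-owner -cpf /tmp/ct100-clean.tar.lzo -C /tmp/ct100 .
pct restore 105 /tmp/ct100-clean.tar.lzo --unprivileged 1 --storage local-zfs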

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36




Em qua, 31 de jul de 2019 às 12:37, Adam Weremczuk
 escreveu:
>
> Hi all,
>
> PVE 5.4.6.
>
> My container was created as privileged and runs on zfs pool shared by 2
> hosts.
>
> I've unsuccessfully tried to convert it from GUI:
> - stopped the container
> - took a backup
> - clicked "restore" ("unprivileged" ticked - default)
>
> extracting archive
> '/var/lib/vz/dump/vzdump-lxc-100-2019_07_31-16_15_48.tar.lzo'
> tar: ./var/spool/postfix/dev/urandom: Cannot mknod: Operation not permitted
> tar: ./var/spool/postfix/dev/random: Cannot mknod: Operation not permitted
> Total bytes read: 619950080 (592MiB, 42MiB/s)
> tar: Exiting with failure status due to previous errors
> TASK ERROR: unable to restore CT 100 - command 'lxc-usernsexec -m
> u:0:10:65536 -m g:0:10:65536 -- tar xpf - --lzop --totals
> --one-file-system -p --sparse --numeric-owner --acls --xattrs
> '--xattrs-include=user.*' '--xattrs-include=security.capability'
> '--warning=no-file-ignored' '--warning=no-xattr-write' -C
> /var/lib/lxc/100/rootfs --skip-old-files --anchored --exclude './dev/*''
> failed: exit code 2
>
> CT 100 completely disappeared from the list!
>
> Earlier attempt from shell (105 was the first available ID):
>
> pct restore 105
> /var/lib/vz/dump/vzdump-lxc-100-2019_07_31-16_15_48.tar.lzo
> -ignore-unpack-errors 1 -unprivileged
> 400 Parameter verification failed.
> storage: storage 'local' does not support container directories
> pct restore   [OPTIONS]
>
> Any hints?
>
> Thanks,
> Adam
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] PVE 5.4 and Intel ixgbe

2019-07-29 Thread Gilberto Nunes
Yep!
The customer (after a long discussion! Stubborn customer!) changed
from CISCO to INTEL SFP+ and everything works fine!

Thanks
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36



Em seg, 29 de jul de 2019 às 09:29, Immo Wetzel
 escreveu:
>
> Yes you point to the right corner.
> The SFP+ must be an intel one for the Intel Corporation 82599ES 10-Gigabit 
> SFI/SFP+ Network Connection card. Even some Finisars are right labled but not 
> officialy supported.
> Check with FS.com or other supplier if you find a suitable SFP+ for your 
> setup.
>
> Immo
>
> This message has been classified General Business by Immo Wetzel on Montag, 
> 29. Juli 2019 at 14:27:10.
>
> From: pve-user [mailto:pve-user-boun...@pve.proxmox.com] On Behalf Of 
> Gilberto Nunes
> Sent: Wednesday, July 17, 2019 8:06 PM
> To: PVE User List
> Subject: Re: [PVE-User] PVE 5.4 and Intel ixgbe
>
> So here the scenario...
> The customer (which is in other city, far away from my current localtion!)
> has the Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection
> but the GBIC is CISCO and the DAC cable is Mikrotik...
> Should CISCO GBIC incompatible with Intel card?? Or maybe the DAC cable
> from Mikrotik??
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
>
> Em qua, 17 de jul de 2019 às 14:53, Gilberto Nunes <
> gilberto.nune...@gmail.com> escreveu:
>
> > I am not sure about it, because now, after a fresh installation, even de
> > enp4sf0 appears...
> > This is so frustrated!
> >
> >
> > ---
> > Gilberto Nunes Ferreira
> >
> > (47) 3025-5907
> > (47) 99676-7530 - Whatsapp / Telegram
> >
> > Skype: gilberto.nunes36
> >
> >
> >
> >
> >
> > Em qua, 17 de jul de 2019 às 14:45, Alex Chekholko via pve-user <
> > pve-user@pve.proxmox.com> escreveu:
> >
> >>
> >>
> >>
> >> -- Forwarded message --
> >> From: Alex Chekholko 
> >> To: PVE User List 
> >> Cc:
> >> Bcc:
> >> Date: Wed, 17 Jul 2019 10:44:07 -0700
> >> Subject: Re: [PVE-User] PVE 5.4 and Intel ixgbe
> >> You can try 'modinfo ixgbe' to query your actual installed version to see
> >> all the parameters it knows about.
> >>
> >> I see on one of my hosts
> >> # modinfo ixgbe
> >> filename:
> >>
> >> /lib/modules/4.15.0-54-generic/kernel/drivers/net/ethernet/intel/ixgbe/ixgbe.ko
> >> version: 5.1.0-k
> >> ...
> >> parm: allow_unsupported_sfp:Allow unsupported and untested SFP+
> >> modules on 82599-based adapters (uint)
> >>
> >> And you can check the exact release notes for your version to see the
> >> allowed values for that parameter.
> >>
> >> IME, you may have some kind of incompable cable/optic anyway; see if you
> >> can try a different one.
> >>
> >> Regards,
> >> Alex
> >>
> >> On Wed, Jul 17, 2019 at 9:33 AM Gilberto Nunes <
> >> gilberto.nune...@gmail.com>
> >> wrote:
> >>
> >> > Hi there everybody!
> >> >
> >> > I have installed PVE 5.4 and try to up ixgbe driver for Intel 10GB
> >> SFI/SPF+
> >> > NIC...
> >> >
> >> > I already do ixgbe-options.conf with
> >> >
> >> > options ixgbe allow_unsupported_sfp=1
> >> >
> >> > But when try to load the module I still get this error:
> >> >
> >> > [ 170.008236] ixgbe :05:00.0: failed to load because an unsupported
> >> > SFP+ or QSFP module type was detected.
> >> >
> >> > [ 170.008262] ixgbe :05:00.0: Reload the driver after installing a
> >> > supported module.
> >> >
> >> > [ 170.022268] ixgbe :05:00.1: failed to load because an unsupported
> >> > SFP+ or QSFP module type was detected.
> >> >
> >> > [ 170.022291] ixgbe :05:00.1: Reload the driver after installing a
> >> > supported module.
> >> >
> >> > I already try to compile Intel module from scratch, but seems to failed
> >> > too!
> >> >
> >> > Thanks for any help!
> >> >
> >> > lspci
> >> >
> >> > 08:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit
> >> SFI/SFP+
> >> > Network Connection (rev 01)
> >&g

[PVE-User] Proxmox VE old versions...

2019-07-19 Thread Gilberto Nunes
Hi there
I am looking for old Proxmox versions, the 1.x series...
Can somebody point me to a download site??

Thanks

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] PVE 5.4 and Intel ixgbe

2019-07-17 Thread Gilberto Nunes
So here is the scenario...
The customer (who is in another city, far away from my current location!)
has the Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection,
but the GBIC is CISCO and the DAC cable is Mikrotik...
Could the CISCO GBIC be incompatible with the Intel card?? Or maybe the DAC
cable from Mikrotik??
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qua, 17 de jul de 2019 às 14:53, Gilberto Nunes <
gilberto.nune...@gmail.com> escreveu:

> I am not sure about it, because now, after a fresh installation, even de
> enp4sf0 appears...
> This is so frustrated!
>
>
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
>
> Em qua, 17 de jul de 2019 às 14:45, Alex Chekholko via pve-user <
> pve-user@pve.proxmox.com> escreveu:
>
>>
>>
>>
>> -- Forwarded message --
>> From: Alex Chekholko 
>> To: PVE User List 
>> Cc:
>> Bcc:
>> Date: Wed, 17 Jul 2019 10:44:07 -0700
>> Subject: Re: [PVE-User] PVE 5.4 and Intel ixgbe
>> You can try 'modinfo ixgbe' to query your actual installed version to see
>> all the parameters it knows about.
>>
>> I see on one of my hosts
>> # modinfo ixgbe
>> filename:
>>
>> /lib/modules/4.15.0-54-generic/kernel/drivers/net/ethernet/intel/ixgbe/ixgbe.ko
>> version:5.1.0-k
>> ...
>> parm:   allow_unsupported_sfp:Allow unsupported and untested SFP+
>> modules on 82599-based adapters (uint)
>>
>> And you can check the exact release notes for your version to see the
>> allowed values for that parameter.
>>
>> IME, you may have some kind of incompable cable/optic anyway; see if you
>> can try a different one.
>>
>> Regards,
>> Alex
>>
>> On Wed, Jul 17, 2019 at 9:33 AM Gilberto Nunes <
>> gilberto.nune...@gmail.com>
>> wrote:
>>
>> > Hi there everybody!
>> >
>> > I have installed PVE 5.4 and try to up ixgbe driver for Intel 10GB
>> SFI/SPF+
>> > NIC...
>> >
>> > I already do ixgbe-options.conf with
>> >
>> > options ixgbe allow_unsupported_sfp=1
>> >
>> > But when try to load the module I still get this error:
>> >
>> > [ 170.008236] ixgbe :05:00.0: failed to load because an unsupported
>> > SFP+ or QSFP module type was detected.
>> >
>> > [ 170.008262] ixgbe :05:00.0: Reload the driver after installing a
>> > supported module.
>> >
>> > [ 170.022268] ixgbe :05:00.1: failed to load because an unsupported
>> > SFP+ or QSFP module type was detected.
>> >
>> > [ 170.022291] ixgbe :05:00.1: Reload the driver after installing a
>> > supported module.
>> >
>> > I already try to compile Intel module from scratch, but seems to failed
>> > too!
>> >
>> > Thanks for any help!
>> >
>> > lspci
>> >
>> > 08:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit
>> SFI/SFP+
>> > Network Connection (rev 01)
>> > 08:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit
>> SFI/SFP+
>> > Network Connection (rev 01)
>> >
>> >
>> > pveversion
>> >
>> > pve-manager/5.4-11/6df3d8d0 (running kernel: 4.15.18-18-pve)
>> > ---
>> > Gilberto Nunes Ferreira
>> >
>> > (47) 3025-5907
>> > (47) 99676-7530 - Whatsapp / Telegram
>> >
>> > Skype: gilberto.nunes36
>> > ___
>> > pve-user mailing list
>> > pve-user@pve.proxmox.com
>> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>> >
>>
>>
>>
>> -- Forwarded message --
>> From: Alex Chekholko via pve-user 
>> To: PVE User List 
>> Cc: Alex Chekholko 
>> Bcc:
>> Date: Wed, 17 Jul 2019 10:44:07 -0700
>> Subject: Re: [PVE-User] PVE 5.4 and Intel ixgbe
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] PVE 5.4 and Intel ixgbe

2019-07-17 Thread Gilberto Nunes
I am not sure about it, because now, after a fresh installation, even the
enp4sf0 interface appears...
This is so frustrating!


---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qua, 17 de jul de 2019 às 14:45, Alex Chekholko via pve-user <
pve-user@pve.proxmox.com> escreveu:

>
>
>
> -- Forwarded message --
> From: Alex Chekholko 
> To: PVE User List 
> Cc:
> Bcc:
> Date: Wed, 17 Jul 2019 10:44:07 -0700
> Subject: Re: [PVE-User] PVE 5.4 and Intel ixgbe
> You can try 'modinfo ixgbe' to query your actual installed version to see
> all the parameters it knows about.
>
> I see on one of my hosts
> # modinfo ixgbe
> filename:
>
> /lib/modules/4.15.0-54-generic/kernel/drivers/net/ethernet/intel/ixgbe/ixgbe.ko
> version:5.1.0-k
> ...
> parm:   allow_unsupported_sfp:Allow unsupported and untested SFP+
> modules on 82599-based adapters (uint)
>
> And you can check the exact release notes for your version to see the
> allowed values for that parameter.
>
> IME, you may have some kind of incompable cable/optic anyway; see if you
> can try a different one.
>
> Regards,
> Alex
>
> On Wed, Jul 17, 2019 at 9:33 AM Gilberto Nunes  >
> wrote:
>
> > Hi there everybody!
> >
> > I have installed PVE 5.4 and try to up ixgbe driver for Intel 10GB
> SFI/SPF+
> > NIC...
> >
> > I already do ixgbe-options.conf with
> >
> > options ixgbe allow_unsupported_sfp=1
> >
> > But when try to load the module I still get this error:
> >
> > [ 170.008236] ixgbe :05:00.0: failed to load because an unsupported
> > SFP+ or QSFP module type was detected.
> >
> > [ 170.008262] ixgbe :05:00.0: Reload the driver after installing a
> > supported module.
> >
> > [ 170.022268] ixgbe :05:00.1: failed to load because an unsupported
> > SFP+ or QSFP module type was detected.
> >
> > [ 170.022291] ixgbe :05:00.1: Reload the driver after installing a
> > supported module.
> >
> > I already try to compile Intel module from scratch, but seems to failed
> > too!
> >
> > Thanks for any help!
> >
> > lspci
> >
> > 08:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit
> SFI/SFP+
> > Network Connection (rev 01)
> > 08:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit
> SFI/SFP+
> > Network Connection (rev 01)
> >
> >
> > pveversion
> >
> > pve-manager/5.4-11/6df3d8d0 (running kernel: 4.15.18-18-pve)
> > ---
> > Gilberto Nunes Ferreira
> >
> > (47) 3025-5907
> > (47) 99676-7530 - Whatsapp / Telegram
> >
> > Skype: gilberto.nunes36
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> >
>
>
>
> -- Forwarded message --
> From: Alex Chekholko via pve-user 
> To: PVE User List 
> Cc: Alex Chekholko 
> Bcc:
> Date: Wed, 17 Jul 2019 10:44:07 -0700
> Subject: Re: [PVE-User] PVE 5.4 and Intel ixgbe
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] PVE 5.4 and Intel ixgbe

2019-07-17 Thread Gilberto Nunes
Hi there everybody!

I have installed PVE 5.4 and am trying to bring up the ixgbe driver for an
Intel 10GB SFI/SFP+ NIC...

I have already created ixgbe-options.conf with

options ixgbe allow_unsupported_sfp=1

But when I try to load the module I still get this error:

[ 170.008236] ixgbe :05:00.0: failed to load because an unsupported
SFP+ or QSFP module type was detected.

[ 170.008262] ixgbe :05:00.0: Reload the driver after installing a
supported module.

[ 170.022268] ixgbe :05:00.1: failed to load because an unsupported
SFP+ or QSFP module type was detected.

[ 170.022291] ixgbe :05:00.1: Reload the driver after installing a
supported module.

I have already tried to compile the Intel module from scratch, but that seems to have failed too!
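
For reference, the exact sequence I am using to apply the option is roughly
this (a sketch, and the reload will only work if the ports are not in use):

echo "options ixgbe allow_unsupported_sfp=1" > /etc/modprobe.d/ixgbe-options.conf
update-initramfs -u -k all          # so the option is also applied at boot time
modprobe -r ixgbe && modprobe ixgbe # reload the driver
dmesg | grep -i ixgbe               # check whether the SFP+ module is accepted now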

Thanks for any help!

lspci

08:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+
Network Connection (rev 01)
08:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+
Network Connection (rev 01)


pveversion

pve-manager/5.4-11/6df3d8d0 (running kernel: 4.15.18-18-pve)
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox VE 6.0 released!

2019-07-16 Thread Gilberto Nunes
Thanks to all Proxmox staff...
You guys do a marvelous job...
I would like to know if QEMU 4.x brings VM fault tolerance, like COLO or
micro-checkpointing, and if Proxmox will incorporate those features in the
near future!

Thanks a lot
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em ter, 16 de jul de 2019 às 09:00, lists  escreveu:

> Many congratulations!
>
> This definitely looks like a great release!
>
> We will checkout the new features in our test cluster!
>
> Thanks.
>
> MJ
>
> On 16-7-2019 13:19, Martin Maurer wrote:
> > Hi all,
> >
> > We're excited to announce the final release of our Proxmox VE 6.0! It's
> > based on the great Debian 10 codename "Buster" and the latest 5.0 Linux
> > kernel, QEMU 4.0, LXC 3.1.0, ZFS 0.8.1, Ceph 14.2, Corosync 3.0, and
> more.
> >
> > This major release includes the latest Ceph Nautilus feautures and an
> > improved Ceph management dashboard. We have updated the cluster
> > communication stack to Corosync 3 using Kronosnet, and have a new
> > selection widget for the network making it simple to select the correct
> > link address in the cluster creation wizard.
> >
> > With ZFS 0.8.1 we have included TRIM support for SSDs and also support
> > for native encryption with comfortable key-handling.
> >
> > The new installer supports ZFS root via UEFI, for example you can boot a
> > ZFS mirror on NVMe SSDs (using systemd-boot instead of grub).
> >
> > And as always we have included countless bugfixes and improvements on a
> > lot of places; see the release notes for all details.
> >
> > Release notes
> > https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_6.0
> >
> > Video intro
> >
> https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-6-0
> >
> >
> > Download
> > https://www.proxmox.com/en/downloads
> > Alternate ISO download:
> > http://download.proxmox.com/iso/
> >
> > Documentation
> > https://pve.proxmox.com/pve-docs/
> >
> > Community Forum
> > https://forum.proxmox.com
> >
> > Source Code
> > https://git.proxmox.com
> >
> > Bugtracker
> > https://bugzilla.proxmox.com
> >
> > FAQ
> > Q: Can I dist-upgrade Proxmox VE 5.4 to 6.0 with apt?
> > A: Please follow the upgrade instructions exactly, as there is a major
> > version bump of corosync (2.x to 3.x)
> > https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
> >
> > Q: Can I install Proxmox VE 6.0 on top of Debian Buster?
> > A: Yes, see
> > https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster
> >
> > Q: Can I upgrade my Proxmox VE 5.4 cluster with Ceph Luminous to 6.0
> > with Ceph Nautilus?
> > A: This is a two step process. First, you have to upgrade Proxmox VE
> > from 5.4 to 6.0, and afterwards upgrade Ceph from Luminous to Nautilus.
> > There are a lot of improvements and changes, please follow exactly the
> > upgrade documentation.
> > https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
> > https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus
> >
> > Q: Where can I get more information about future feature updates?
> > A: Check our roadmap, forum, mailing list and subscribe to our
> newsletter.
> >
> > A big THANK YOU to our active community for all your feedback, testing,
> > bug reporting and patch submitting!
> >
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ixgbe

2019-07-09 Thread Gilberto Nunes
Hi there

I have created the ixgbe.conf file in /etc/modprobe.d with this:

options ixgbe  allow_unsupported_sfp=1.1
I have compiled the module from the Intel source code...
The worst thing is that this network is thousands of kilometers from my
location...
So a partner is checking that network, looking for some issues in the switch,
the FC cable, whatever...
I am still waiting for him.
Anyway, thanks for the answer...
I'll keep you posted.
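
For the record, this is roughly how I expect the option to be applied on the
host (just a sketch, assuming the in-tree ixgbe module; if I am not mistaken
the parameter takes a comma-separated value per port, i.e. =1,1 rather than
=1.1):

# /etc/modprobe.d/ixgbe.conf
options ixgbe allow_unsupported_sfp=1,1

# rebuild the initramfs so the option survives reboots, then reload the driver
update-initramfs -u
rmmod ixgbe && modprobe ixgbe

# check whether the SFP is now accepted
dmesg | grep -i sfp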

Thanks

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em ter, 9 de jul de 2019 às 01:45, Dmitry Petuhov 
escreveu:

> What command
>
> dmesg | grep ixgbe
>
> shows after boot?
>
>
> 09.07.2019 2:47, Gilberto Nunes пишет:
> > Hi there
> >
> > We have some issues with driver ixgbe in Proxmox 5.4.10!
> > Server is fully updated but the NIC doesn't show any link at all!
> > Somebody can help, please!
> > Thanks a lot.
> >
> > ---
> > Gilberto Nunes Ferreira
> >
> > (47) 3025-5907
> > (47) 99676-7530 - Whatsapp / Telegram
> >
> > Skype: gilberto.nunes36
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] ixgbe

2019-07-08 Thread Gilberto Nunes
Hi there

We have some issues with driver ixgbe in Proxmox 5.4.10!
Server is fully updated but the NIC doesn't show any link at all!
Somebody can help, please!
Thanks a lot.

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Trouble with LXC Backup

2019-07-08 Thread Gilberto Nunes
Hi there! Does anybody here have trouble when making an LXC backup with 1 TB of
root disk?

I have one LXC container whose backup takes forever and gets stuck... This is
pretty annoying...

Anybody? Thanks a lot

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox VE 6.0 beta released!

2019-07-05 Thread Gilberto Nunes
Hi there! Correct me if I am wrong, but after making changes to the network, do
we still need to reboot the server?

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em sex, 5 de jul de 2019 às 06:16, Gilou 
escreveu:

> Le 04/07/2019 à 21:06, Martin Maurer a écrit :
> > Hi all!
> >
> > We're happy to announce the first beta release for the Proxmox VE 6.x
> > family! It's based on the great Debian Buster (Debian 10) and a 5.0
> > kernel, QEMU 4.0, ZFS 0.8.1, Ceph 14.2.1, Corosync 3.0 and countless
> > improvements and bugfixes. The new installer supports ZFS root via UEFI,
> > for example you can boot a ZFS mirror on NVMe SSDs (using systemd-boot
> > instead of grub). The full release notes will be available together with
> > the final release announcement.
> >
> > For more details, see:
> > https://forum.proxmox.com/threads/proxmox-ve-6-0-beta-released.55670/
> >
>
> Awesome work, we'll be testing it for sure. Thanks for the nice doc and
> beta work ahead of Debian 10, it feels really good to see you following
> them closely.
>
> Cheers,
>
> Gilou
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] [pve-devel] Proxmox VE 6.0 beta released!

2019-07-04 Thread Gilberto Nunes
Good job!
Is there something about QEMU MC? Has this kind of thing been added to the QEMU
mainstream tree or a devel tree?
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qui, 4 de jul de 2019 às 16:06, Martin Maurer 
escreveu:

> Hi all!
>
> We're happy to announce the first beta release for the Proxmox VE 6.x
> family! It's based on the great Debian Buster (Debian 10) and a 5.0 kernel,
> QEMU 4.0, ZFS 0.8.1, Ceph 14.2.1, Corosync 3.0 and countless improvements
> and bugfixes. The new installer supports ZFS root via UEFI, for example you
> can boot a ZFS mirror on NVMe SSDs (using systemd-boot instead of grub).
> The full release notes will be available together with the final release
> announcement.
>
> For more details, see:
> https://forum.proxmox.com/threads/proxmox-ve-6-0-beta-released.55670/
>
> --
> Best Regards,
>
> Martin Maurer
>
> mar...@proxmox.com
> https://www.proxmox.com
>
> ___
> pve-devel mailing list
> pve-de...@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox Ceph to Hyper-V or VMWare

2019-06-03 Thread Gilberto Nunes
http://docs.ceph.com/docs/master/rbd/iscsi-target-cli-manual-install/
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em seg, 3 de jun de 2019 às 17:05, Gilberto Nunes <
gilberto.nune...@gmail.com> escreveu:

> Hi
>
> Actually my need will be supplied by this info
>
> "configure ceph in proxmox as nfs, smb or iscsi  gateway for hyper-v or
> vmware to consume storage from : yes possible, but will require some cli
> usage."
>
> I need Hyper-V or VMware to consume storage from the PVE Ceph cluster storage...
>
> I would appreciate it if you could point me to some clues about it...
>
> Thanks a lot
> --
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
>
> Em seg, 3 de jun de 2019 às 16:48, Ronny Aasen 
> escreveu:
>
>> On 03.06.2019 20:42, Gilberto Nunes wrote:
>> > Hi there
>> >
>> > Simple question: Is there any way to connect a Proxmox Ceph cluster
>> with a
>> > Hyper-V or VMWare server??
>>
>>
>> Define "connect"..
>>
>> manage vmware/hyper-v/proxmox hosts in the proxmox web interface : no
>> not possible.
>>
>> have them connected to subnets where the hosts are pingable, and copy vm
>> image files over the network:  yes possible.
>>
>> configure ceph in proxmox as nfs, smb or iscsi  gateway for hyper-v or
>> vmware to consume storage from : yes possible, but will require some cli
>> usage.
>>
>> migrate vm's between proxmox and vmware and hyper-v: no, not in a
>> interface. you can do a manual migration, where you move a disk image
>> over and set up a new vm on the other platform using the old disk
>> of course.
>>
>>
>> what did you have in mind?
>>
>>
>> kind regards
>> Ronny Aasen
>>
>>
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox Ceph to Hyper-V or VMWare

2019-06-03 Thread Gilberto Nunes
Hi

Actually my need will be supplied by this info

"configure ceph in proxmox as nfs, smb or iscsi  gateway for hyper-v or
vmware to consume storage from : yes possible, but will require some cli
usage."

I need Hyper-V or VMware to consume storage from the PVE Ceph cluster storage...

I would appreciate it if you could point me to some clues about it...
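
Just so I can sanity-check the idea, something along these lines is what I
picture (a rough sketch with made-up pool, image and IQN names, using plain
LIO/targetcli rather than the ceph-iscsi gateway from the docs):

# create and map an RBD image on one of the PVE/Ceph nodes
rbd create rbd/hyperv-lun0 --size 1024G
rbd map rbd/hyperv-lun0              # appears as /dev/rbd0

# export it over iSCSI with targetcli
targetcli /backstores/block create name=hyperv-lun0 dev=/dev/rbd0
targetcli /iscsi create iqn.2019-06.local.pve:hyperv
targetcli /iscsi/iqn.2019-06.local.pve:hyperv/tpg1/luns create /backstores/block/hyperv-lun0
targetcli /iscsi/iqn.2019-06.local.pve:hyperv/tpg1/acls create iqn.1991-05.com.microsoft:hyperv-host
targetcli saveconfig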

Thanks a lot
--
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em seg, 3 de jun de 2019 às 16:48, Ronny Aasen 
escreveu:

> On 03.06.2019 20:42, Gilberto Nunes wrote:
> > Hi there
> >
> > Simple question: Is there any way to connect a Proxmox Ceph cluster with
> a
> > Hyper-V or VMWare server??
>
>
> Define "connect"..
>
> manage vmware/hyper-v/proxmox hosts in the proxmox web interface : no
> not possible.
>
> have them connected to subnets where the hosts are pingable, and copy vm
> image files over the network:  yes possible.
>
> configure ceph in proxmox as nfs, smb or iscsi  gateway for hyper-v or
> vmware to consume storage from : yes possible, but will require some cli
> usage.
>
> migrate vm's between proxmox and vmware and hyper-v: no, not in a
> interface. you can do a manual migration, where you move a disk image
> over and set up a new vm on the other platform using the old disk of course.
>
>
> what did you have in mind?
>
>
> kind regards
> Ronny Aasen
>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Proxmox Ceph to Hyper-V or VMWare

2019-06-03 Thread Gilberto Nunes
Hi there

Simple question: Is there any way to connect a Proxmox Ceph cluster with a
Hyper-V or VMWare server??

Thanks a lot.
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] creating 2 node cluster via crossover cable

2019-05-30 Thread Gilberto Nunes
Hi! In this situation, I usually fill /etc/hosts so that the secondary IP
points to a dedicated hostname... For me, this way is easier...
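
For example, a minimal sketch of what I mean (the 10.10.10.x addresses and
names below are placeholders for the crossover link; if I remember correctly,
pvecm on 5.x accepts --bindnet0_addr/--ring0_addr for this):

# /etc/hosts on both nodes
10.10.10.1  pve01-corosync
10.10.10.2  pve02-corosync

# on the first node
pvecm create mycluster --bindnet0_addr 10.10.10.1 --ring0_addr pve01-corosync

# on the second node
pvecm add pve01-corosync --ring0_addr pve02-corosync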
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qui, 30 de mai de 2019 às 12:15, Adam Weremczuk 
escreveu:

> Hello,
>
> I have 2 nodes running the latest PVE 5.4.
>
> The servers have 4 x Gb ports each so I've decided to bond 2 and connect
> to a switch.
>
> The remaining 2 are bonded and used for connecting nodes directly with a
> pair of crossover cables.
>
> This is really what I want as both nodes will be using their own local storage
> and act as an active-passive node (shared zfs pool).
>
> This sync traffic will be much better suited to use a direct route
> rather than mix with CT/VM and other traffic served by the switch in front.
>
> And I'm not ready to spend money on a new 10 Gb switch + cards.
>
> Anyway, I've tried creating a cluster using these "crossover" IPs (Ring
> 0 address) but it seems to persist in using the primary IPs.
>
> I could fiddle with /etc/hosts and probably get somewhere but I was
> wondering if there is an official, clean and reliable way of achieving
> this?
>
> Thanks,
> Adam
>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] [pve-devel] Proxmox VE 5.4 released!

2019-04-11 Thread Gilberto Nunes
Nice job

Thanks a lot
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qui, 11 de abr de 2019 às 07:10, Martin Maurer 
escreveu:

> Hi all!
>
> We are very pleased to announce the general availability of Proxmox VE 5.4.
>
> Built on Debian 9.8 (Stretch) and a specially modified Linux Kernel
> 4.15, this version of Proxmox VE introduces a new wizard for installing
> Ceph storage via the user interface, and brings enhanced flexibility
> with HA clustering, hibernation support for virtual machines, and
> support for Universal Second Factor (U2F) authentication.
>
> The new features of Proxmox VE 5.4 focus on usability and simple
> management of the software-defined infrastructure as well as on security
> management.
>
> Countless bugfixes and more smaller improvements are listed in the
> release notes.
>
> Release notes
> https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_5.4
>
> Video tutorial
> What's new in Proxmox VE 5.4?
> https://www.proxmox.com/en/training/video-tutorials
>
> Download
> https://www.proxmox.com/en/downloads
> Alternate ISO download:
> http://download.proxmox.com/iso/
>
> Documentation
> https://pve.proxmox.com/pve-docs/
>
> Source Code
> https://git.proxmox.com
>
> Bugtracker
> https://bugzilla.proxmox.com
>
> FAQ
> Q: Can I install Proxmox VE 5.4 on top of Debian Stretch?
> A: Yes, see
> https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Stretch
>
> Q: Can I upgrade Proxmox VE 5.x to 5.4 with apt?
> A: Yes, just via GUI or via CLI with apt update && apt dist-upgrade
>
> Q: Can I upgrade Proxmox VE 4.x to 5.4 with apt dist-upgrade?
> A: Yes, see https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0. If you
> Ceph on V4.x please also check
> https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous. Please note,
> Proxmox VE 4.x is already end of support since June 2018, see Proxmox VE
> Support Lifecycle
> https://forum.proxmox.com/threads/proxmox-ve-support-lifecycle.35755/
>
> Many THANKS to our active community for all your feedback, testing, bug
> reporting and patch submitting!
>
> --
> Best Regards,
>
> Martin Maurer
>
> Proxmox VE project leader
>
> mar...@proxmox.com
> https://www.proxmox.com
>
>
> ___
> pve-devel mailing list
> pve-de...@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] VM hang on random no block pool initialized

2019-04-10 Thread Gilberto Nunes
Hi there. After successfully exporting and importing a running Ubuntu instance
from Amazon EC2, when I try to start it in Proxmox (latest version) I get a
random "no block pool initialized"...
Can anyone help with this issue?
Thanks in advance!
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] PVE Firewall Port forwarding...

2019-04-06 Thread Gilberto Nunes
Hi there...

Is there any way to use port forwarding in the PVE firewall?
I tried to create a security group but it doesn't work.
So I need to create the rule below by hand in order for it to work properly:

iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 10200 -j DNAT
--to-destination aaa.bbb.ccc.ddd:10200

I suppose it would be nice to have this feature in the web GUI...
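
One way to keep that rule across reboots, for now, is an ifupdown hook in
/etc/network/interfaces (a sketch; the existing vmbr0 address/bridge lines are
omitted and aaa.bbb.ccc.ddd is the guest IP as above):

iface vmbr0 inet static
        # ... existing address/bridge options ...
        post-up   iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 10200 -j DNAT --to-destination aaa.bbb.ccc.ddd:10200
        post-down iptables -t nat -D PREROUTING -i vmbr0 -p tcp --dport 10200 -j DNAT --to-destination aaa.bbb.ccc.ddd:10200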

Thanks

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] CEPH and huge pages

2019-03-26 Thread Gilberto Nunes
I set huge pages back from always to madvise and everything is OK...
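
For anyone hitting the same thing, the change can be made at runtime like this
(a sketch; these are the standard transparent hugepage sysfs knobs):

# see the current setting (the active value is in brackets)
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag

# switch back to madvise without rebooting
echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
echo madvise > /sys/kernel/mm/transparent_hugepage/defrag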

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em ter, 26 de mar de 2019 às 16:07, Gilberto Nunes <
gilberto.nune...@gmail.com> escreveu:

> Hi there
>
> I have 3 Proxmox servers which are also a Ceph cluster.
> This morning I changed huge_pages from madvise to always and got a lot
> of trouble with PG inconsistencies...
>
> Is there any known issue with huge pages and Ceph?
>
> All servers have more than 100 GB of memory.
> Proxmox 5-2.2.
> Use Ceph with bluestore, with SATA and SAS HDD...
>
> Thanks a lot
>
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] CEPH and huge pages

2019-03-26 Thread Gilberto Nunes
Hi there

I have 3 Proxmox servers which are also a Ceph cluster.
This morning I changed huge_pages from madvise to always and got a lot
of trouble with PG inconsistencies...

Is there any known issue with huge pages and Ceph?

All servers have more than 100 GB of memory.
Proxmox 5-2.2.
Use Ceph with bluestore, with SATA and SAS HDD...

Thanks a lot

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Weired trouble with NIC in Windows Server

2019-03-13 Thread Gilberto Nunes
Sorry
I meant "weird"
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qua, 13 de mar de 2019 às 19:07, Gilberto Nunes <
gilberto.nune...@gmail.com> escreveu:

> Hi there
> I am facing a weired problem with NIC in Windows Server.
> When use vmbr0, the NIC in Windows server is assuming the IP
> 169.bla.bla... And not the network IP 192.168.0.X.
> Anybody has the same issue? How solved it?
> Thanks
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Weired trouble with NIC in Windows Server

2019-03-13 Thread Gilberto Nunes
Hi there
I am facing a weired problem with NIC in Windows Server.
When using vmbr0, the NIC in the Windows server is assuming the IP 169.bla.bla...
and not the network IP 192.168.0.X.
Has anybody had the same issue? How did you solve it?
Thanks
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Shared storage recommendations

2019-02-26 Thread Gilberto Nunes
Hi

IMHO, Ceph is very good.
I have a production environment using 6 servers in a Proxmox Ceph cluster.
No data loss so far. Performance is good. Unfortunately, the hardware that
I have at hand is low on RAM... But still, it is OK for most of the jobs.
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em ter, 26 de fev de 2019 às 13:21, Eneko Lacunza 
escreveu:

> Hi
>
> El 26/2/19 a las 10:41, Thomas Lamprecht escribió:
> > On 2/25/19 6:22 PM, Frederic Van Espen wrote:
> >> We're designing a new datacenter network where we will run proxmox
> nodes on
> >> about 30 servers. Of course, shared storage is a part of the design.
> >>
> >> What kind of shared storage would anyone recommend based on their
> >> experience and what kind of network equipment would be essential in that
> >> design? Let's assume for a bit that budget is not constrained too much.
> We
> >> should be able to afford a vendor specific iSCSI device, or be able to
> >> implement an open source solution like Ceph.
> >>
> >> Concerning storage space and IOPS requirements, we're very modest in the
> >> current setup (about 13TB storage space used, very modest IOPS, about
> 6500
> >> write IOPS and 4200 read IOPS currently distributed in the whole network
> >> according to the prometheus monitoring).
> >>
> >> Key in the whole setup is day to day maintainability and scalability.
> > I'd use ceph then. Scalability is something ceph is just made for, and
> > maintainability is also really not to bad, IMO. You can do CTs and VMs on
> > normal blockdevices (rdb) and also have a file based shared FS (cephFS)
> both
> > well integrated into PVE frontend/backend, which other shared storage
> systems
> > aren't.
> >
> We maintain 8 tiny Proxmox clusters with shared storage, 7 of them use
> Ceph and the other an EMC VNXe3200 iSCSI device.
>
> VNXe3200 works well (a bit slow maybe) and doesn't seem to be able to
> scale.
>
> Ceph has worked really well for us, even the clusters being tiny (3 to 4
> nodes). You'll have to learn a bit about Ceph, but it really pays off
> and the integration in Proxmox is really cool.
>
> Just use the money saved on iSCSI propietary stuff on some more
> servers/disks and get better performance and reliability.
>
> Cheers
> Eneko
>
> --
> Zuzendari Teknikoa / Director Técnico
> Binovo IT Human Project, S.L.
> Telf. 943569206
> Astigarraga bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
> www.binovo.es
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox Ceph Cluster and osd_target_memory

2019-02-19 Thread Gilberto Nunes
Hi

I have 15 SATA HDDs of 3, 2 and 1 TB, all 7,200 RPM,
and 6 SATA HDDs of 4 and 8 TB, all 5,900 RPM.
Is there a way to convert BlueStore to FileStore?
Thanks
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em ter, 19 de fev de 2019 às 04:03, Alexandre DERUMIER 
escreveu:

> How many disks do you have in your node?
>
> maybe using filestore could help if you are low on memory.
>
>
> - Mail original -----
> De: "Gilberto Nunes" 
> À: "proxmoxve" 
> Envoyé: Lundi 18 Février 2019 03:37:34
> Objet: [PVE-User] Proxmox Ceph Cluster and osd_target_memory
>
> Hi there
>
> I have a system with a 6-node PVE Ceph cluster and all of them are low on
> memory - 16 GB to be exact.
> Is it reasonable to lower osd_target_memory to 2 GB, in order to make the system
> run more smoothly? I cannot buy more memory for now...
> Thanks for any advice
>
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Recommendation for Cluster and HA with iSCSI...

2019-02-18 Thread Gilberto Nunes
Hi there

We have 3 servers and one Dell PowerVault MD3200i storage array.
Is the best way to use this iSCSI storage to put LVM on top of iSCSI, or
to use the LUNs directly?
I will migrate from Citrix XenServer to Proxmox, so the XVAs will become raw images.
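
To make the question concrete, the LVM-on-iSCSI variant would look roughly like
this in /etc/pve/storage.cfg (a sketch; the portal, target IQN, device and VG
names are placeholders):

iscsi: md3200i
        portal 192.168.130.101
        target iqn.1984-05.com.dell:powervault.md3200i.<serial>
        content none

lvm: vm-storage
        vgname vg_md3200i
        shared 1
        content images

plus a one-time setup of the shared volume group on a single node, where
/dev/sdX is the LUN as it shows up after logging in to the target:

pvcreate /dev/sdX
vgcreate vg_md3200i /dev/sdX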
Thanks for any advice

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Proxmox Ceph Cluster and osd_target_memory

2019-02-17 Thread Gilberto Nunes
Hi there

I have a system with a 6-node PVE Ceph cluster and all of them are low on
memory - 16 GB to be exact.
Is it reasonable to lower osd_target_memory to 2 GB, in order to make the system
run more smoothly? I cannot buy more memory for now...
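
What I have in mind is something like this in /etc/pve/ceph.conf (a sketch
only; the actual Ceph option is spelled osd_memory_target, if I am not
mistaken, and it needs a reasonably recent BlueStore/Luminous build):

[osd]
     osd_memory_target = 2147483648    # 2 GB per OSD daemon

# then restart the OSDs, one node at a time
systemctl restart ceph-osd.target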
Thanks for any advice

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Restore VM Backup in new HDD with new size...

2019-02-06 Thread Gilberto Nunes
Never mind.
I just made a backup and restored it to a different VMID and everything is fine!
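
For the record, the whole thing boils down to (a sketch with placeholder VMIDs,
storage names and archive path; the real dump file name carries a timestamp):

# back up the original VM (ID 100 here)
vzdump 100 --storage local --mode snapshot --compress lzo

# restore the archive into a new VMID on the target storage
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.lzo 101 --storage local-lvm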

Thanks a lot

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qua, 6 de fev de 2019 às 08:54, Yannis Milios 
escreveu:

> Since it is a Linux installation, you could try to backup the system via a
> livecd with fsarchiver to an external drive, then restore it to the virtual
> disk.
>
> http://www.fsarchiver.org/
>
> Yannis
>
> On Wed, 6 Feb 2019 at 10:18, Gilberto Nunes 
> wrote:
>
> > Hi list
> >
> > I have here a VM which has 2 HDDs directly attached, each one 1 TB
> > in size.
> > I made a backup and the final VMA file is 95 GB. I know this is
> > compressed.
> > My question is:
> > Is there a way to restore this VM backup but create a virtual HDD with a
> > smaller size than the original disks?
> > I mean, restore a VM which has a 1 TB HDD to a VM with a new HDD of
> > 200 GB, for instance.
> > My problem here is that I need to release these 2 physical HDDs. These 2 HDDs
> > have CentOS 7 installed, with LVM! I tried Clonezilla but it doesn't work.
> >
> > Thanks for any help
> >
> > Best
> > ---
> > Gilberto Nunes Ferreira
> >
> > (47) 3025-5907
> > (47) 99676-7530 - Whatsapp / Telegram
> >
> > Skype: gilberto.nunes36
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> >
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Restore VM Backup in new HDD with new size...

2019-02-06 Thread Gilberto Nunes
Hi list

I have here a VM which has 2 HDDs directly attached, each one 1 TB
in size.
I made a backup and the final VMA file is 95 GB. I know this is
compressed.
My question is:
Is there a way to restore this VM backup but create a virtual HDD with a smaller
size than the original disks?
I mean, restore a VM which has a 1 TB HDD to a VM with a new HDD of
200 GB, for instance.
My problem here is that I need to release these 2 physical HDDs. These 2 HDDs
have CentOS 7 installed, with LVM! I tried Clonezilla but it doesn't work.

Thanks for any help

Best
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


  1   2   3   4   5   6   7   8   9   >