Re: [PVE-User] Connect PVE Box to 2 iscsi server...

2019-08-22 Thread Gilberto Nunes
Hi Ronny

I'll try it and then report back here!

Thanks
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36




On Fri, 16 Aug 2019 at 17:53, Ronny Aasen
 wrote:
>
> On 16.08.2019 18:06, Gilberto Nunes wrote:
> > Hi there
> >
> > Here I have two iSCSI servers that work together to provide
> > a single connection for a certain initiator.
> > Well, at least for a Windows box using the MS iSCSI initiator.
> > On that OS, I was able to connect using both iSCSI servers, and in
> > the Windows storage manager I see just one HDD.
> > So when I shut down iSCSI serverA, the HDD remains up and running on the
> > Windows box.
> > However, when I tried to do that on a PVE box, I was able to activate
> > multipath and I see both HDDs from the iSCSI servers, /dev/sdb and
> > /dev/sdc.
> > But I could not figure out how to make PVE see both HDDs in storage.cfg
> > as a single storage, just like the Windows box does!
> > What am I missing??
>
> Hello
>
> I assume you need to use multipathd. I have a similar setup using
> Fibre Channel disks, but the method should be the same.
>
> If it is not installed, install multipath-tools using apt, then
> check that multipathd is running.
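>
> For example (standard Debian package and service names):
>
> apt install multipath-tools
> systemctl status multipathd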
>
> run "multipath -v2" it should scan and create the multipath device.
>
> With "multipath -ll" and "dmsetup ls --tree" you should now see the
> multiple disks come together as a single device.
>
> # example
> # multipath -ll
> 360example0292e42000b dm-2 FUJITSU,ETERNUS_DXL
> size=5.0T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='0' wp=rw
> |-+- policy='service-time 0' prio=50 status=active
> | |- 0:0:1:0 sdc 8:32 active ready running
> | `- 2:0:1:0 sde 8:64 active ready running
> `-+- policy='service-time 0' prio=10 status=enabled
>   |- 0:0:0:0 sdb 8:16 active ready running
>   `- 2:0:0:0 sdd 8:48 active ready running
>
> verify you see the device under
> /dev/mapper/360example0292e42000b
>
>
> Use the device as you see fit. We use it as a shared LVM VG between multiple
> nodes for HA and failover (a sketch of that route is below).
> But you can also format it with the filesystem of your choice and mount it as
> a directory, or run a cluster filesystem like OCFS2 or GFS2 and mount it as a
> shared directory.
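>
> As a rough, untested sketch of the shared-LVM route (the VG name
> "vg_iscsi" and storage ID "iscsi-lvm" are placeholders, and the WWID is
> the one from the example above -- substitute your own):
>
> pvcreate /dev/mapper/360example0292e42000b
> vgcreate vg_iscsi /dev/mapper/360example0292e42000b
>
> # then in /etc/pve/storage.cfg:
> lvm: iscsi-lvm
>         vgname vg_iscsi
>         content images,rootdir
>         shared 1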
>
> good luck
> Ronny
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Will subscription work behind NAT?

2019-08-22 Thread Eneko Lacunza

Hi,


On 22/8/19 at 12:26, Patrick Westenberg wrote:

will the subscription check work if hosts have private IPs only and are
not accessible from the web?
Yes, it works if the hosts have access to the internet via HTTP/HTTPS (i.e.
if apt-get update works, for example).
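
If the hosts can only reach the internet through a proxy, you can point
both apt and the subscription check at it. A sketch, assuming a proxy at
proxy.example.com:3128 (a placeholder address; adjust to your setup):

# /etc/apt/apt.conf.d/76proxy
Acquire::http::Proxy "http://proxy.example.com:3128";
Acquire::https::Proxy "http://proxy.example.com:3128";

# and for the subscription check, set Datacenter -> Options -> HTTP proxy,
# i.e. in /etc/pve/datacenter.cfg:
http_proxy: http://proxy.example.com:3128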


Cheers
Eneko

--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943569206
Astigarraga bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
www.binovo.es

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Will subscription work behind NAT?

2019-08-22 Thread Patrick Westenberg
Hello,

will the subscription check work if hosts have private IPs only and are
not accessible from the web?

Regards
Patrick

-- 
Westenberg + Kueppers GbR  Spanische Schanzen 37
 Buero Koeln   47495 Rheinberg
pwestenb...@wk-serv.de Tel.: +49 (0)2843 90369-06
http://www.wk-serv.de  Fax : +49 (0)2843 90369-07
Gesellschafter: Sebastian Kueppers & Patrick Westenberg
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox 6 - disk problem

2019-08-22 Thread lord_Niedzwiedz

Hello,

The disks are NVMe (M.2), attached via PCIe adapter cards.
Until now everything worked fine and never hung on Proxmox 5.4.

root@tomas:/var/log# smartctl -a /dev/sda
=== START OF INFORMATION SECTION ===
Model Family: Seagate Barracuda ES.2
Device Model: ST31000340NS
Serial Number:    9QJ2LV6L
LU WWN Device Id: 5 000c50 01082a141
Firmware Version: SN05
User Capacity:    1,000,203,804,160 bytes [1.00 TB]
Sector Size:  512 bytes logical/physical
Rotation Rate:    7200 rpm
Device is:    In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 2.6, 3.0 Gb/s
Local Time is:    Thu Aug 22 10:16:10 2019 CEST

==> WARNING: There are known problems with these drives,
see the following Seagate web pages:
http://knowledge.seagate.com/articles/en_US/FAQ/207931en
http://knowledge.seagate.com/articles/en_US/FAQ/207963en

SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82)    Offline data collection activity
                    was completed without error.
                    Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0)    The previous self-test routine completed
                    without error or no self-test has ever
                    been run.
Total time to complete Offline
data collection:         (  642) seconds.
Offline data collection
capabilities:              (0x7b) SMART execute Offline immediate.
                    Auto Offline data collection on/off support.
                    Suspend Offline collection upon new
                    command.
                    Offline surface scan supported.
                    Self-test supported.
                    Conveyance Self-test supported.
                    Selective Self-test supported.
SMART capabilities:    (0x0003)    Saves SMART data before entering
                    power-saving mode.
                    Supports SMART auto save timer.
Error logging capability:    (0x01)    Error logging supported.
                    General Purpose Logging supported.
Short self-test routine
recommended polling time:      (   1) minutes.
Extended self-test routine
recommended polling time:      ( 237) minutes.
Conveyance self-test routine
recommended polling time:      (   2) minutes.
SCT capabilities:        (0x003d)    SCT Status supported.
                    SCT Error Recovery Control supported.
                    SCT Feature Control supported.
                    SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   078   063   044    Pre-fail Always   -           60157077
  3 Spin_Up_Time            0x0003   099   099   000    Pre-fail Always   -           0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age  Always   -           119
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail Always   -           3
  7 Seek_Error_Rate         0x000f   080   060   030    Pre-fail Always   -           22054801530
  9 Power_On_Hours          0x0032   089   011   000    Old_age  Always   -           10379
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail Always   -           0
 12 Power_Cycle_Count       0x0032   100   037   020    Old_age  Always   -           120
184 End-to-End_Error        0x0032   100   100   099    Old_age  Always   -           0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age  Always   -           0
188 Command_Timeout         0x0032   100   100   000    Old_age  Always   -           0
189 High_Fly_Writes         0x003a   100   100   000    Old_age  Always   -           0
190 Airflow_Temperature_Cel 0x0022   067   048   045    Old_age  Always   -           33 (Min/Max 28/34)
194 Temperature_Celsius     0x0022   033   052   000    Old_age  Always   -           33 (0 15 0 0 0)
195 Hardware_ECC_Recovered  0x001a   027   008   000    Old_age  Always   -           60157077
197 Current_Pending_Sector  0x0012   100   100   000    Old_age  Always   -           0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age  Offline  -           0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age  Always   -           0


SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1    0    0  Not_testing
    2    0    0  Not_testing
    3    0    0  Not_testing
    4    0    0  Not_testing
    5    0 

Re: [PVE-User] Proxmox 6 - disk problem

2019-08-22 Thread Eneko Lacunza

Hi,

So what disks/RAID controller is in the server? :)

My guess is a disk has failed :) Did you try smartctl?
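
For example (the device name /dev/sda here is just a placeholder; point
smartctl at your actual disks):

  smartctl -a /dev/sda            # health summary and SMART attributes
  smartctl -t short /dev/sda      # start a short self-test
  smartctl -l selftest /dev/sda   # read the self-test log afterwards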

Also, I think attachments are stripped off :)

Cheers

On 22/8/19 at 10:03, lord_Niedzwiedz wrote:

CPU usage 0.04% of 32 CPU(s)
IO delay    20.38%        !!
Load average    37.97,37.26,30.31
RAM usage    45.25% (56.93 GiB of 125.81 GiB)
KSM sharing    0 B
HD space(root)    0.53% (1.32 GiB of 247.29 GiB)
SWAP usage        N/A
CPU(s)        32 x AMD EPYC 7281 16-Core Processor (1 Socket)
Kernel Version        Linux 5.0.15-1-pve #1 SMP PVE 5.0.15-1 (Wed, 03 
Jul 2019 10:51:57 +0200)

PVE Manager Version        pve-manager/6.0-4/2a719255

Proxmox is working very slowly.
I stopped all VMs.

htop  -    shows nothing
iotop -    shows nothing


If I try the command:
# sync
- the shell just hangs !! ;/


The same here too:
root@tomas:~# pveperf
CPU BOGOMIPS:  134377.28
REGEX/SECOND:  2100393
HD SIZE:   247.29 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 531.28

^C^Z
[1]+  Stopped pveperf
root@tomas:~# ^C

After this:    IO delay    40%


On the physical console I have:
INFO: task zvol:554 blocked for more than 120 seconds.
      Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task txg_quiesce:1007 blocked for more than 120 seconds.
      Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task kvm:27326 blocked for more than 120 seconds.
      Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task kvm:8930 blocked for more than 120 seconds.
      Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task zvol:26963 blocked for more than 120 seconds.
      Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task zvol:26967 blocked for more than 120 seconds.
      Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task zvol:26972 blocked for more than 120 seconds.
      Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task zvol:26974 blocked for more than 120 seconds.
      Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task zvol:26976 blocked for more than 120 seconds.
      Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task zvol:26980 blocked for more than 120 seconds.

On restart, at the end I have:
[  !!  ] Forcibly rebooting: Ctrl-Alt-Del was pressed more than 7
times within 2s
systemd-shutdown[1]: Syncing filesystems and block devices - timed out,
issuing SIGKILL to PID 3940.

Started bpfilter
pvefw-logger [24351]: received terminate request (signal)
pvefw-logger [24351]: stopping pvefw logger

The server does not stop/restart   ;-/
Any ideas??!!

Log file included.

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user



--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943569206
Astigarraga bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
www.binovo.es

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph server manageability issue in upgraded PVE 6 Ceph Server

2019-08-22 Thread Eneko Lacunza

Hi Dominik,

On 22/8/19 at 9:50, Dominik Csapak wrote:


On 8/21/19 2:37 PM, Eneko Lacunza wrote:


# pveceph createosd /dev/sdb -db_dev /dev/sdd
device '/dev/sdd' is already in use and has no LVM on it



This sounds like a bug... can you open one on bugzilla.proxmox.com
while I investigate?
We should be able to use a disk as db/wal even if there are only
partitions on it.


https://bugzilla.proxmox.com/show_bug.cgi?id=2341

Not that I like bugs, but at least it seems there will be a fix, thanks 
a lot! :-)


Cheers
Eneko

--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943569206
Astigarraga bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
www.binovo.es

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Proxmox 6 - disk problem

2019-08-22 Thread lord_Niedzwiedz

CPU usage    0.04% of 32 CPU(s)
IO delay    20.38%        !!
Load average    37.97,37.26,30.31
RAM usage    45.25% (56.93 GiB of 125.81 GiB)
KSM sharing    0 B
HD space(root)    0.53% (1.32 GiB of 247.29 GiB)
SWAP usage        N/A
CPU(s)        32 x AMD EPYC 7281 16-Core Processor (1 Socket)
Kernel Version        Linux 5.0.15-1-pve #1 SMP PVE 5.0.15-1 (Wed, 03 
Jul 2019 10:51:57 +0200)

PVE Manager Version        pve-manager/6.0-4/2a719255

Proxmox is working very slowly.
I stopped all VMs.

htop  -    shows nothing
iotop -    shows nothing


If I try the command:
# sync
- the shell just hangs !! ;/


The same here too:
root@tomas:~# pveperf
CPU BOGOMIPS:  134377.28
REGEX/SECOND:  2100393
HD SIZE:   247.29 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 531.28

^C^Z
[1]+  Stopped pveperf
root@tomas:~# ^C

After this:    IO delay    40%


On the physical console I have:
INFO: task zvol:554 blocked for more than 120 seconds.
      Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task txg_quiesce:1007 blocked for more than 120 seconds.
      Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task kvm:27326 blocked for more than 120 seconds.
      Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task kvm:8930 blocked for more than 120 seconds.
      Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task zvol:26963 blocked for more than 120 seconds.
      Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task zvol:26967 blocked for more than 120 seconds.
      Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task zvol:26972 blocked for more than 120 seconds.
      Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task zvol:26974 blocked for more than 120 seconds.
      Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task zvol:26976 blocked for more than 120 seconds.
      Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task zvol:26980 blocked for more than 120 seconds.

On restart, at the end I have:
[  !!  ] Forcibly rebooting: Ctrl-Alt-Del was pressed more than 7
times within 2s
systemd-shutdown[1]: Syncing filesystems and block devices - timed out,
issuing SIGKILL to PID 3940.

Started bpfilter
pvefw-logger [24351]: received terminate request (signal)
pvefw-logger [24351]: stopping pvefw logger

The server does not stop/restart   ;-/
Any ideas??!!

Log file included.

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph server manageability issue in upgraded PVE 6 Ceph Server

2019-08-22 Thread Dominik Csapak

Hi,

On 8/21/19 2:37 PM, Eneko Lacunza wrote:


# pveceph createosd /dev/sdb -db_dev /dev/sdd
device '/dev/sdd' is already in use and has no LVM on it



This sounds like a bug... can you open one on bugzilla.proxmox.com
while I investigate?
We should be able to use a disk as db/wal even if there are only
partitions on it.


thanks
Dominik

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user