Re: [PVE-User] Proxmox 6 - disk problem

2019-08-22 Thread lord_Niedzwiedz
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning:   0x00
Temperature:    27 Celsius
Available Spare:    100%
Available Spare Threshold:  10%
Percentage Used:    2%
Data Units Read:    10,105,558 [5.17 TB]
Data Units Written: 16,223,988 [8.30 TB]
Host Read Commands: 654,021,540
Host Write Commands:    594,078,253
Controller Busy Time:   930
Power Cycles:   96
Power On Hours: 1,540
Unsafe Shutdowns:   51
Media and Data Integrity Errors:    0
Error Information Log Entries:  0
Warning  Comp. Temperature Time:    0
Critical Comp. Temperature Time:    0
Temperature Sensor 1:   27 Celsius
Temperature Sensor 2:   28 Celsius

Error Information (NVMe Log 0x01, max 64 entries)
No Errors Logged

root@tomas:/var/log# smartctl -a /dev/nvme4n1
smartctl 7.0 2018-12-30 r4883 [x86_64-linux-5.0.15-1-pve] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number:   Samsung SSD 970 EVO 500GB
Serial Number:  S466NB0K630742Y
Firmware Version:   2B2QEXE7
PCI Vendor/Subsystem ID:    0x144d
IEEE OUI Identifier:    0x002538
Total NVM Capacity: 500,107,862,016 [500 GB]
Unallocated NVM Capacity:   0
Controller ID:  4
Number of Namespaces:   1
Namespace 1 Size/Capacity:  500,107,862,016 [500 GB]
Namespace 1 Utilization:    498,767,261,696 [498 GB]
Namespace 1 Formatted LBA Size: 512
Namespace 1 IEEE EUI-64:    002538 5681b09819
Local Time is:  Thu Aug 22 10:16:24 2019 CEST
Firmware Updates (0x16):    3 Slots, no Reset required
Optional Admin Commands (0x0017):   Security Format Frmw_DL Self_Test
Optional NVM Commands (0x005f): Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp

Maximum Data Transfer Size: 512 Pages
Warning  Comp. Temp. Threshold: 85 Celsius
Critical Comp. Temp. Threshold: 85 Celsius

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +     6.20W       -        -    0  0  0  0        0       0
 1 +     4.30W       -        -    1  1  1  1        0       0
 2 +     2.10W       -        -    2  2  2  2        0       0
 3 -   0.0400W       -        -    3  3  3  3      210    1200
 4 -   0.0050W       -        -    4  4  4  4     2000    8000

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         0

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning:   0x00
Temperature:    27 Celsius
Available Spare:    100%
Available Spare Threshold:  10%
Percentage Used:    2%
Data Units Read:    14,121,818 [7.23 TB]
Data Units Written: 15,364,291 [7.86 TB]
Host Read Commands: 668,618,811
Host Write Commands:    581,016,189
Controller Busy Time:   969
Power Cycles:   102
Power On Hours: 1,587
Unsafe Shutdowns:   56
Media and Data Integrity Errors:    0
Error Information Log Entries:  0
Warning  Comp. Temperature Time:    0
Critical Comp. Temperature Time:    0
Temperature Sensor 1:   27 Celsius
Temperature Sensor 2:   27 Celsius

Error Information (NVMe Log 0x01, max 64 entries)
No Errors Logged
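
For reference, the same health check can be run across all NVMe devices in one pass; a minimal sketch (the /dev/nvme?n1 glob and the smartctl options are assumptions based on the output above):

# quick health sweep over every NVMe namespace device
for dev in /dev/nvme?n1; do
    echo "== $dev =="
    smartctl -H "$dev"         # overall-health verdict only
    smartctl -l error "$dev"   # NVMe Error Information log (0x01)
done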


On 22.08.2019 at 10:07, Eneko Lacunza wrote:

Hi,

So what disks/RAID controller are there on the server? :)

My guess is a disk has failed :) Did you try smartctl ?

Also, I think attachments are stripped off :)

Cheers

On 22/8/19 at 10:03, lord_Niedzwiedz wrote:

CPU usage    0.04% of 32 CPU(s)
IO delay    20.38%        !!
Load average    37.97, 37.26, 30.31
RAM usage    45.25% (56.93 GiB of 125.81 GiB)
KSM sharing    0 B
HD space (root)    0.53% (1.32 GiB of 247.29 GiB)
SWAP usage        N/A
CPU(s)        32 x AMD EPYC 7281 16-Core Processor (1 Socket)
Kernel Version        Linux 5.0.15-1-pve #1 SMP PVE 5.0.15-1 (Wed, 03 Jul 2019 10:51:57 +0200)

PVE Manager Version        pve-manager/6.0-4/2a719255

Proxmox is working very slowly.
I stopped all VMs.

htop    -    says nothing
iotop   -    says nothing


If I try the command:
# sync
- the shell just hangs !! ;/


The same here:
root@tomas:~# pveperf
CPU BOGOMIPS:  134377.28
REGEX/SECOND:  2100393
HD SIZE:   247.29 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 531.28

^C^Z
[1]+  Stopped pveperf
root@tomas:~# ^C

After this:    IO delay    40%


In a physical console

[PVE-User] Proxmox 6 - disk problem

2019-08-22 Thread lord_Niedzwiedz

CPU usage    0.04% of 32 CPU(s)
IO delay    20.38%        !!
Load average    37.97, 37.26, 30.31
RAM usage    45.25% (56.93 GiB of 125.81 GiB)
KSM sharing    0 B
HD space (root)    0.53% (1.32 GiB of 247.29 GiB)
SWAP usage        N/A
CPU(s)        32 x AMD EPYC 7281 16-Core Processor (1 Socket)
Kernel Version        Linux 5.0.15-1-pve #1 SMP PVE 5.0.15-1 (Wed, 03 Jul 2019 10:51:57 +0200)

PVE Manager Version        pve-manager/6.0-4/2a719255

Proxmox is working very slowly.
I stopped all VMs.

htop    -    says nothing
iotop   -    says nothing


If I try the command:
# sync
- the shell just hangs !! ;/


The same here:
root@tomas:~# pveperf
CPU BOGOMIPS:  134377.28
REGEX/SECOND:  2100393
HD SIZE:   247.29 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 531.28

^C^Z
[1]+  Stopped pveperf
root@tomas:~# ^C

After this:    IO delay    40%


In a physical console I have:
INFO: task zvol:554 blocked for more than 120 seconds.
 Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task txg_quiesce:1007 blocked for more than 120 seconds.
 Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task kvm:27326 blocked for more than 120 seconds.
 Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task kvm:8930 blocked for more than 120 seconds.
 Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task zvol:26963 blocked for more than 120 seconds.
 Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task zvol:26967 blocked for more than 120 seconds.
 Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task zvol:26972 blocked for more than 120 seconds.
 Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task zvol:26974 blocked for more than 120 seconds.
 Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task zvol:26976 blocked for more than 120 seconds.
 Tainted: P    O    5.0.15-1-pve #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task zvol:26980 blocked for more than 120 seconds.
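
The blocked tasks above (txg_quiesce, zvol, kvm) all sit in the ZFS write path, which usually means the transaction group is stalled on slow or failing storage rather than a CPU problem. A few hedged commands that could narrow this down while the hang is happening (assuming the root pool is named rpool, as shown by pveperf above):

zpool status -v rpool      # pool health, scrub/resilver state, per-vdev error counters
zpool iostat -v rpool 2    # per-device throughput and queue behaviour every 2 s
dmesg | grep -iE 'nvme|ata|i/o error'    # low-level device errors
cat /proc/1007/stack       # kernel stack of the stuck txg_quiesce PID from the log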

At the end, during the restart, I have:
[  !!  ]  Forcibly rebooting: Ctrl-Alt-Del was pressed more than 7 times within 2s
systemd-shutdown[1]: Syncing filesystems and block devices - timed out, issuing SIGKILL to PID 3940.

Started bpfilter
pvefw-logger [24351]: received terminate request (signal)
pvefw-logger [24351]: stopping pvefw logger

The server does not stop/restart   ;-/
Any ideas        ??!!

log file included.



[PVE-User] Proxmox VE 6.0 - GUI problem

2019-08-19 Thread lord_Niedzwiedz

    This problem appears for the second time on the same machine.
I have a problematic Windows 8.1 VM there.
First, the VMs turn themselves off (on VirtualBox this did not happen).
Second, the server stopped responding via the GUI yesterday.
"qm list" could do nothing.
It hung after the command.
I issued "init 6".
It took a long time to restart.

And I have the same symptom this morning.

root@tomas1:~# systemctl status pveproxy
● pveproxy.service - PVE API Proxy Server
   Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2019-08-19 08:05:08 CEST; 4min 30s ago
  Process: 2999 ExecStart=/usr/bin/pveproxy start (code=exited, status=0/SUCCESS)

 Main PID: 3006 (pveproxy)
    Tasks: 4 (limit: 4915)
   Memory: 129.8M
   CGroup: /system.slice/pveproxy.service
   ├─3006 pveproxy
   ├─7396 pveproxy worker
   ├─7397 pveproxy worker
   └─7398 pveproxy worker

Aug 19 08:09:35 tomas1 pveproxy[7336]: worker exit
Aug 19 08:09:35 tomas1 pveproxy[7337]: worker exit
Aug 19 08:09:35 tomas1 pveproxy[3006]: worker 7336 finished
Aug 19 08:09:35 tomas1 pveproxy[3006]: starting 1 worker(s)
Aug 19 08:09:35 tomas1 pveproxy[3006]: worker 7397 started
Aug 19 08:09:35 tomas1 pveproxy[3006]: worker 7337 finished
Aug 19 08:09:35 tomas1 pveproxy[3006]: starting 1 worker(s)
Aug 19 08:09:35 tomas1 pveproxy[3006]: worker 7398 started
Aug 19 08:09:35 tomas1 pveproxy[7397]: unable to open log file '/var/log/pveproxy/access.log' - Permission denied
Aug 19 08:09:35 tomas1 pveproxy[7398]: unable to open log file '/var/log/pveproxy/access.log' - Permission denied


root@tomas1:~#
root@tomas1:~# ls -lh /var/log/pveproxy/access.log
-rw------- 1 root root 0 Aug 16 00:00 /var/log/pveproxy/access.log
root@tomas1:~# chmod 777 /var/log/pveproxy/access.log
root@tomas1:~# systemctl restart pveproxy

And I see the GUI !!  ;-/

No idea why this happens.
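
chmod 777 hides the symptom, but the worker processes run as www-data (visible in ps), so the cleaner fix is presumably to hand the log back to that user with tight permissions; a sketch, assuming www-data is indeed the pveproxy user on this host:

chown www-data:www-data /var/log/pveproxy/access.log
chmod 640 /var/log/pveproxy/access.log
systemctl restart pveproxy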



[PVE-User] ZFS - question

2019-08-13 Thread lord_Niedzwiedz

        Hello,

I have 2 questions about ZFS.
Each of my four SSD disks does 550 MB/s read/write,
but the ZFS array delivers only 700 MB/s (for comparison, mdadm does 1400 MB/s).
The same on NVMe (M.2) disks.
A single M.2 does 2500 MB/s.
In RAID-Z1 it shows only 1500 MB/s per disk.

And question two  ;-D

When I run a "Stop" backup on Proxmox, it shuts down the machine.
It starts making the copy.
But it immediately turns the VM back on (it only "restarts" it; it does not stay
stopped for the duration of the copy !! - why ??).

"resuming VM again after 21 seconds" ?? !! why is it like this ?

Is it better to make a Snapshot copy ?? (that's question number three ;) )

cheers,
Gregory
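
To make such speed comparisons reproducible across ZFS, mdadm and a bare disk, the same sequential test should be run on each; a minimal fio sketch (the /rpool/fio.test path and 4G size are arbitrary, and O_DIRECT is deliberately not used because ZFS releases of this era reject it):

# sequential write with a closing fsync so the ARC cannot hide the real speed
fio --name=seqwrite --filename=/rpool/fio.test --rw=write --bs=1M --size=4G --end_fsync=1
rm /rpool/fio.test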

On 12.08.2019 at 11:29, Thomas Lamprecht wrote:

Am 8/6/19 um 3:57 PM schrieb Hervé Ballans:

Our OSDs currently use the 'filestore' backend. Does Nautilus handle this
backend, or do we have to migrate the OSDs to 'Bluestore' ?

Nautilus can still handle Filestore.
But, we do not support adding new Filestore OSDs through our tooling
any more (you can still use ceph-volume directly, though) - just FYI.

cheers,
Thomas



[PVE-User] Proxmox installation problem

2019-08-03 Thread lord_Niedzwiedz

Hello,

I have a problem with the Proxmox installation.
It is an IBM 3650 (7979) server with 6 SAS HDDs.
Proxmox only boots with hardware RAID.
It does not boot with RAID-Z or RAID1.
The error is in the attachment.

kind regards
Gregor


Re: [PVE-User] Proxmox - BIG PROBLEM

2019-07-29 Thread lord_Niedzwiedz

But I have the binary files and commands in the directory.
And they do not work.

root@tom12:/usr/bin# ./ls
ls      lsblk   lsinitramfs  lslocks   lsmem  lsns  lspci  lsusb
lsattr  lscpu   lsipc        lslogins  lsmod  lsof  lspgpot

root@tomas12:/usr/bin# ./ls
-bash: ./ls: No such file or directory
root@tom12:/usr/bin# /usr/bin/ls
-bash: /usr/bin/ls: No such file or directory

Why ??!!
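
A plausible explanation (an assumption, not confirmed in the thread): the binaries in /usr/bin are intact, but every dynamically linked program needs the loader at /lib64/ld-linux-x86-64.so.2, and /lib64 was one of the symlinks that rm /* deleted, so exec fails with "No such file or directory" even though ./ls exists. Shell builtins still work in the surviving session, so this can be checked without running any external binary:

# both lines use only bash builtins
echo /usr/bin/ls*    # expands: the ls binary is still on disk
echo /lib64/*        # prints the literal pattern: /lib64 no longer resolves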

On 29.07.2019 at 11:24, lord_Niedzwiedz wrote:


Your system without /bin/ and /lib can't be usable; you need to
recover it completely.

But why did rm erase them,
and not remove the rest of the directories?

I have a working VM there that I cannot clone or stop.
I personally prefer to install a new system and migrate the VM files
(which you have on local-zfs). But forget about using the GUI.





    - On 29-Jul-19, at 11:08, lord_Niedzwiedz wrote:


        The VMs are on local-zfs.
    But local-zfs is not available in the GUI !!
    The VMs still work.

    And in:
    cd /mnt/pve/
    I see the directories:
    nvme0n1 / nvme1n1 / sda /

    One virtual machine is here.
    The rest are on local-zfs (and they work, but I cannot see the space).

    Proxmox is still working.

    I maybe lost only:
    /bin
    /lib
    /lib64
    /sbin
    How is it possible that the command:
    rm /*
    removed them ?? !!

    Without the -r option.

    And did not delete the rest of the directories ?? !!
    Maybe these were symbolic links?

    Gregor

    Where are the VM's disks located ? LVM ? ZFS ?
    It is possible that you still have your disks (if LVM, for
example), but I think it is better to install a fresh Proxmox
server and move the disks from the old hard drive to the new one.
    You need some knowledge about Linux and LVM, and you can save
all your data.




    - On 29-Jul-19, at 10:55, lord_niedzwiedz sir_misi...@o2.pl wrote:


    I ran a command on the server by mistake:

    rm /*
    rm: cannot remove '/Backup': Is a directory
    rm: cannot remove '/boot': Is a directory
    rm: cannot remove '/dev': Is a directory
    rm: cannot remove '/etc': Is a directory
    rm: cannot remove '/home': Is a directory
    rm: cannot remove '/media': Is a directory
    rm: cannot remove '/mnt': Is a directory
    rm: cannot remove '/opt': Is a directory
    rm: cannot remove '/proc': Is a directory
    rm: cannot remove '/Roboczy': Is a directory
    rm: cannot remove '/root': Is a directory
    rm: cannot remove '/rpool': Is a directory
    rm: cannot remove '/run': Is a directory
    rm: cannot remove '/srv': Is a directory
    rm: cannot remove '/sys': Is a directory
    rm: cannot remove '/tmp': Is a directory
    rm: cannot remove '/usr': Is a directory
    rm: cannot remove '/var': Is a directory

    Strangely, the machines keep working.
    I'm logged in to the GUI.
    But I cannot get to the VMs.
    I cannot execute any commands.
    What to do ??!!
From what I see, I deleted these directories:
    /bin
    /lib
    /lib64
    /sbin
    from /.
    How is this possible ??!!
    I'm still logged in on one console with a shell, but I cannot run any commands.
    Even:
    qm
    -bash: /usr/sbin/qm: /usr/bin/perl: bad interpreter: No such file or directory
    root@tomas:/usr/bin# ls
    -bash: /usr/bin/ls: No such file or directory
    root@tomas:/usr/bin# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

    Any Idea ??
    Please Help Me.

    Gregor




--
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492



Re: [PVE-User] Proxmox - BIG PROBLEM

2019-07-29 Thread lord_Niedzwiedz


Your system without /bin/ and /lib can't be usable; you need to
recover it completely.

But why did rm erase them,
and not remove the rest of the directories?

I have a working VM there that I cannot clone or stop.
I personally prefer to install a new system and migrate the VM files
(which you have on local-zfs). But forget about using the GUI.





- On 29-Jul-19, at 11:08, lord_Niedzwiedz wrote:


    The VMs are on local-zfs.
But local-zfs is not available in the GUI !!
The VMs still work.

And in:
cd /mnt/pve/
I see the directories:
nvme0n1 / nvme1n1 / sda /

One virtual machine is here.
The rest are on local-zfs (and they work, but I cannot see the space).

Proxmox is still working.

I maybe lost only:
/bin
/lib
/lib64
/sbin
How is it possible that the command:
rm /*
removed them ?? !!

Without the -r option.

And did not delete the rest of the directories ?? !!
Maybe these were symbolic links?

Gregor

Where are the VM's disks located ? LVM ? ZFS ?
It is possible that you still have your disks (if LVM, for example), but
I think it is better to install a fresh Proxmox server and move the
disks from the old hard drive to the new one.
You need some knowledge about Linux and LVM, and you can save all your
data.



- On 29-Jul-19, at 10:55, lord_niedzwiedz sir_misi...@o2.pl wrote:

I ran a command on the server by mistake:

rm /*
rm: cannot remove '/Backup': Is a directory
rm: cannot remove '/boot': Is a directory
rm: cannot remove '/dev': Is a directory
rm: cannot remove '/etc': Is a directory
rm: cannot remove '/home': Is a directory
rm: cannot remove '/media': Is a directory
rm: cannot remove '/mnt': Is a directory
rm: cannot remove '/opt': Is a directory
rm: cannot remove '/proc': Is a directory
rm: cannot remove '/Roboczy': Is a directory
rm: cannot remove '/root': Is a directory
rm: cannot remove '/rpool': Is a directory
rm: cannot remove '/run': Is a directory
rm: cannot remove '/srv': Is a directory
rm: cannot remove '/sys': Is a directory
rm: cannot remove '/tmp': Is a directory
rm: cannot remove '/usr': Is a directory
rm: cannot remove '/var': Is a directory

Strangely, the machines keep working.
I'm logged in to the GUI.
But I cannot get to the VMs.
I cannot execute any commands.
What to do ??!!
From what I see, I deleted these directories:
/bin
/lib
/lib64
/sbin
from /.
How is this possible ??!!
I'm still logged in on one console with a shell, but I cannot run any commands.
Even:
qm
-bash: /usr/sbin/qm: /usr/bin/perl: bad interpreter: No such file or directory
root@tomas:/usr/bin# ls
-bash: /usr/bin/ls: No such file or directory
root@tomas:/usr/bin# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

Any Idea ??
Please Help Me.

Gregor




--
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492



Re: [PVE-User] Proxmox - BIG PROBLEM

2019-07-29 Thread lord_Niedzwiedz

        The VMs are on local-zfs.
But local-zfs is not available in the GUI !!
The VMs still work.

And in:
cd /mnt/pve/
I see the directories:
nvme0n1 / nvme1n1 / sda /

One virtual machine is here.
The rest are on local-zfs (and they work, but I cannot see the space).

Proxmox is still working.

I maybe lost only:
/bin
/lib
/lib64
/sbin
How is it possible that the command:
rm /*
removed them ?? !!

Without the -r option.

And did not delete the rest of the directories ?? !!
Maybe these were symbolic links?

Gregor

Where are the VM's disks located ? LVM ? ZFS ?
It is possible that you still have your disks (if LVM, for example), but I think
it is better to install a fresh Proxmox server and move the disks from
the old hard drive to the new one.
You need some knowledge about Linux and LVM, and you can save all your data.



- On 29-Jul-19, at 10:55, lord_Niedzwiedz sir_misi...@o2.pl wrote:


I ran a command on the server by mistake:

rm /*
rm: cannot remove '/Backup': Is a directory
rm: cannot remove '/boot': Is a directory
rm: cannot remove '/dev': Is a directory
rm: cannot remove '/etc': Is a directory
rm: cannot remove '/home': Is a directory
rm: cannot remove '/media': Is a directory
rm: cannot remove '/mnt': Is a directory
rm: cannot remove '/opt': Is a directory
rm: cannot remove '/proc': Is a directory
rm: cannot remove '/Roboczy': Is a directory
rm: cannot remove '/root': Is a directory
rm: cannot remove '/rpool': Is a directory
rm: cannot remove '/run': Is a directory
rm: cannot remove '/srv': Is a directory
rm: cannot remove '/sys': Is a directory
rm: cannot remove '/tmp': Is a directory
rm: cannot remove '/usr': Is a directory
rm: cannot remove '/var': Is a directory

Strangely, the machines keep working.
I'm logged in to the GUI.
But I cannot get to the VMs.
I cannot execute any commands.
What to do ??!!
From what I see, I deleted these directories:
/bin
/lib
/lib64
/sbin
from /.
How is this possible ??!!
I'm still logged in on one console with a shell, but I cannot run any commands.
Even:
qm
-bash: /usr/sbin/qm: /usr/bin/perl: bad interpreter: No such file or directory
root@tomas:/usr/bin# ls
-bash: /usr/bin/ls: No such file or directory
root@tomas:/usr/bin# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

Any Idea ??
Please Help Me.

Gregor



[PVE-User] Proxmox - BIG PROBLEM

2019-07-29 Thread lord_Niedzwiedz

I ran a command on the server by mistake:

rm /*
rm: cannot remove '/Backup': Is a directory
rm: cannot remove '/boot': Is a directory
rm: cannot remove '/dev': Is a directory
rm: cannot remove '/etc': Is a directory
rm: cannot remove '/home': Is a directory
rm: cannot remove '/media': Is a directory
rm: cannot remove '/mnt': Is a directory
rm: cannot remove '/opt': Is a directory
rm: cannot remove '/proc': Is a directory
rm: cannot remove '/Roboczy': Is a directory
rm: cannot remove '/root': Is a directory
rm: cannot remove '/rpool': Is a directory
rm: cannot remove '/run': Is a directory
rm: cannot remove '/srv': Is a directory
rm: cannot remove '/sys': Is a directory
rm: cannot remove '/tmp': Is a directory
rm: cannot remove '/usr': Is a directory
rm: cannot remove '/var': Is a directory

Strangely, the machines keep working.
I'm logged in to the GUI.
But I cannot get to the VMs.
I cannot execute any commands.
What to do ??!!
From what I see, I deleted these directories:
/bin
/lib
/lib64
/sbin
from /.
How is this possible ??!!
I'm still logged in on one console with a shell, but I cannot run any commands.

Even:
qm
-bash: /usr/sbin/qm: /usr/bin/perl: bad interpreter: No such file or directory

root@tomas:/usr/bin# ls
-bash: /usr/bin/ls: No such file or directory
root@tomas:/usr/bin# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

Any Idea ??
Please Help Me.

Gregor
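
What likely happened (hedged): on a merged-/usr system, /bin, /sbin, /lib and /lib64 are only symlinks into /usr, and rm /* without -r deletes symlinks while refusing real directories. A recovery sketch, assuming a rescue shell (for example from the installer ISO) with the damaged root filesystem mounted at /mnt:

# recreate the merged-usr symlinks that 'rm /*' removed
cd /mnt
ln -s usr/bin   bin
ln -s usr/sbin  sbin
ln -s usr/lib   lib
ln -s usr/lib64 lib64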



Re: [PVE-User] proxmox installation problem

2019-07-24 Thread lord_Niedzwiedz

Option A) I don't see any grub file ;-/
nor the /usr/sbin/update-grub command.

B) I don't see the update-initramfs command in the system either.
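
If the rootdelay workaround from the wiki page quoted below helps, it can first be typed once at the GRUB prompt by editing the boot entry; making it permanent afterwards is a sketch along these lines (the value 10 is only the wiki's example, and update-grub/update-initramfs exist in the installed system, not in the installer's rescue shell):

# in /etc/default/grub, extend the kernel command line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"
# then rebuild the boot config and the initramfs:
update-grub
update-initramfs -u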

On 24.07.2019 at 12:49, Dmitry Petuhov wrote:
Try to look at 
https://pve.proxmox.com/wiki/ZFS:_Tips_and_Tricks#Boot_fails_and_goes_into_busybox


These parameters give the Linux kernel time to detect the hard disks
present in the system and find ZFS on them.



On 24.07.2019 13:39, lord_Niedzwiedz wrote:

Hello,

I have a problem with the Proxmox installation.
It is an IBM 3650 (7979) server with 6 SAS HDDs.
Proxmox only boots with hardware RAID.
It does not boot with RAID-Z or RAID1.
The error: https://help.komandor.pl/aaa.jpg

kind regards
Gregor


[PVE-User] proxmox installation problem

2019-07-24 Thread lord_Niedzwiedz

Hello,

I have a problem with the Proxmox installation.
It is an IBM 3650 (7979) server with 6 SAS HDDs.
Proxmox only boots with hardware RAID.
It does not boot with RAID-Z or RAID1.
The error: https://help.komandor.pl/aaa.jpg

kind regards
Gregor


Re: [PVE-User] rc.local - problem

2019-05-16 Thread lord_Niedzwiedz



sudo vi /etc/systemd/system/rc-local.service

[Unit]
 Description=/etc/rc.local Compatibility
 ConditionPathExists=/etc/rc.local

[Service]
 Type=forking
 ExecStart=/etc/rc.local start
 TimeoutSec=0
 StandardOutput=tty
 RemainAfterExit=yes
 SysVStartPriority=99

[Install]
 WantedBy=multi-user.target

Save and close the file. Make sure the /etc/rc.local file is executable:

sudo chmod +x /etc/rc.local

After that, enable the service on system boot:

sudo systemctl enable rc-local

Now start the service and check its status:

sudo systemctl start rc-local.service


And I got:

systemctl start rc-local.service
Job for rc-local.service failed because the control process exited with error code.

See "systemctl status rc-local.service" and "journalctl -xe" for details.
root@louve:~# systemctl status rc-local.service
● rc-local.service - /etc/rc.local Compatibility
   Loaded: loaded (/etc/systemd/system/rc-local.service; enabled; vendor preset: enabled)
  Drop-In: /lib/systemd/system/rc-local.service.d
           └─debian.conf
   Active: failed (Result: exit-code) since Thu 2019-05-16 12:26:04 CEST; 8s ago
  Process: 4738 ExecStart=/etc/rc.local start (code=exited, status=203/EXEC)

  CPU: 829us

May 16 12:26:04 louve systemd[1]: Starting /etc/rc.local Compatibility...
May 16 12:26:04 louve systemd[1]: rc-local.service: Control process exited, code=exited status=203
May 16 12:26:04 louve systemd[1]: Failed to start /etc/rc.local Compatibility.
May 16 12:26:04 louve systemd[1]: rc-local.service: Unit entered failed state.
May 16 12:26:04 louve systemd[1]: rc-local.service: Failed with result 'exit-code'.



OK, I did not have
exit 0 at the end of the file   ;-)
Problem resolved ;)
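
For reference, status=203/EXEC above means systemd could not execute /etc/rc.local at all, which points at a missing shebang or executable bit as much as at the missing exit 0; a minimal known-good file would presumably look like:

#!/bin/sh -e
#
# /etc/rc.local - runs once at the end of multi-user boot
# (must be executable: chmod +x /etc/rc.local)

# site-local commands go here

exit 0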


[PVE-User] rc.local - problem

2019-05-16 Thread lord_Niedzwiedz

sudo vi /etc/systemd/system/rc-local.service

[Unit]
 Description=/etc/rc.local Compatibility
 ConditionPathExists=/etc/rc.local

[Service]
 Type=forking
 ExecStart=/etc/rc.local start
 TimeoutSec=0
 StandardOutput=tty
 RemainAfterExit=yes
 SysVStartPriority=99

[Install]
 WantedBy=multi-user.target

Save and close the file. Make sure the /etc/rc.local file is executable:

sudo chmod +x /etc/rc.local

After that, enable the service on system boot:

sudo systemctl enable rc-local

Now start the service and check its status:

sudo systemctl start rc-local.service


And I got:

systemctl start rc-local.service
Job for rc-local.service failed because the control process exited with error code.

See "systemctl status rc-local.service" and "journalctl -xe" for details.
root@louve:~# systemctl status rc-local.service
● rc-local.service - /etc/rc.local Compatibility
   Loaded: loaded (/etc/systemd/system/rc-local.service; enabled; vendor preset: enabled)
  Drop-In: /lib/systemd/system/rc-local.service.d
           └─debian.conf
   Active: failed (Result: exit-code) since Thu 2019-05-16 12:26:04 CEST; 8s ago
  Process: 4738 ExecStart=/etc/rc.local start (code=exited, status=203/EXEC)

  CPU: 829us

May 16 12:26:04 louve systemd[1]: Starting /etc/rc.local Compatibility...
May 16 12:26:04 louve systemd[1]: rc-local.service: Control process exited, code=exited status=203
May 16 12:26:04 louve systemd[1]: Failed to start /etc/rc.local Compatibility.
May 16 12:26:04 louve systemd[1]: rc-local.service: Unit entered failed state.
May 16 12:26:04 louve systemd[1]: rc-local.service: Failed with result 'exit-code'.




[PVE-User] why ;-]

2019-03-26 Thread lord_Niedzwiedz

root@ave:~# apt upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages have been kept back:
  zfs-initramfs zfsutils-linux
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.


Proxmox, why ? ;)

Why ??!!  ;-D
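
The short answer (hedged, but it matches Proxmox's standing upgrade advice): plain apt upgrade refuses to install or remove additional packages, and the new zfsutils-linux pulls in changed dependencies, so apt keeps it back. On PVE the recommended invocation is the full upgrade:

apt update
apt full-upgrade    # equivalent to: apt-get dist-upgrade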



Re: [PVE-User] UEFI - Proxmox

2019-03-25 Thread lord_Niedzwiedz

I started Proxmox with UEFI on disk /dev/sda with LVM.
It starts correctly.

But I tried the same on NVMe and it does not work  ;-/

I've noticed that if your system boots in UEFI mode and you choose ZFS
during installation, Proxmox installs ZFS and a UEFI boot partition. This
then does not boot afterwards, without warnings or errors.

I believe that Proxmox does not support ZFS in combination with UEFI. You
have to use ext4/lvm instead, or boot in BIOS mode.

Hope this helps, Arjen

On Thu, Mar 21, 2019, 16:16 Alain Péan  wrote:


On 21/03/2019 at 09:16, lord_Niedzwiedz wrote:

I have only a UEFI BIOS in the server.

How do I make Proxmox start ?

I installed it, but Proxmox does not start  ;-/

Any ideas ?

I am using UEFI with newer servers. It works without problems. For
example, I have the boot/efi partition:
# cat /etc/fstab
#  
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=CF7B-601B /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

With usual servers (I have Dell ones), you have the choice in the BIOS
between legacy BIOS and UEFI. Just choose UEFI, then boot with the latest
Proxmox VE ISO. If you don't have the choice, only UEFI, just boot the
ISO and proceed with the installation, which is largely automated.

Alain

--
Administrateur Système/Réseau
C2N Centre de Nanosciences et Nanotechnologies (UMR 9001)
Boulevard Thomas Gobert (ex Avenue de La Vauve), 91920 Palaiseau
Tel : 01-70-27-06-88 Bureau A255



Re: [PVE-User] UEFI - Proxmox

2019-03-25 Thread lord_Niedzwiedz



I have only a UEFI BIOS in the server.

How do I make Proxmox start ?

I installed it, but Proxmox does not start  ;-/

Any ideas ?


I am using UEFI with newer servers. It works without problems. For
example, I have the boot/efi partition:

# cat /etc/fstab
#  
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=CF7B-601B /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

With usual servers (I have Dell ones), you have the choice in the BIOS
between legacy BIOS and UEFI. Just choose UEFI, then boot with the latest
Proxmox VE ISO. If you don't have the choice, only UEFI, just boot
the ISO and proceed with the installation, which is largely automated.


Alain

When I install Proxmox I get a partition table like this:

Device            Start       End    Sectors    Size  Type
/dev/nvme0n1p1       34      2047       2014   1007K  BIOS boot
/dev/nvme0n1p2     2048   1050623    1048576    512M  EFI System
/dev/nvme0n1p3  1050624 976773134  975722511  465.3G  Linux LVM

But the system does not start.
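
Since the partition layout itself looks correct, one hedged next step is to check whether the firmware actually has a boot entry pointing at that EFI partition (the entry label and loader path below are assumptions, not confirmed values):

efibootmgr -v    # list firmware boot entries and their loader paths
# if nothing points at the ESP, an entry could be created by hand:
efibootmgr -c -d /dev/nvme0n1 -p 2 -L "proxmox" -l '\EFI\proxmox\grubx64.efi'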


[PVE-User] UEFI with Proxmox

2019-03-21 Thread lord_Niedzwiedz

        Hi,
I have only a UEFI BIOS in the server.

How do I make Proxmox start ?

I installed it, but Proxmox does not start  ;-/

Any ideas ?

Regards
Gregor



[PVE-User] UEFI - Proxmox

2019-03-21 Thread lord_Niedzwiedz

        Hi,
I have only a UEFI BIOS in the server.

How do I make Proxmox start ?

I installed it, but Proxmox does not start  ;-/

Any ideas ?

Regards
Gregor


Re: [PVE-User] 5.2 installer fails randomly

2019-02-28 Thread lord_Niedzwiedz



Hello,

I've been installing Proxmox VE 5.2.1 from CD on IBM/Lenovo x3650 M3 
with default settings.
The server was running the same version of Proxmox before and then 
briefly ESXi 6.7.


On the first attempt installer failed with:

Unable to install boot loader

(...)
Installing for i386-pc platform.
Installation finished. No errors reported.
Installing for x86_64-efi platform.
File descriptor 4 (/dev/sda2) leaked on vgs invocation. Parent PID 26317: /usr/sbin/grub-install
File descriptor 4 (/dev/sda2) leaked on vgs invocation. Parent PID 26317: /usr/sbin/grub-install

Could not delete variable: No such file or directory
/usr/sbin/grub-install: error: efibootmgr failed to register the boot entry: Block device required.

unable to install the EFI boot loader on '/dev/sda'
umount: /target: target is busy
(in some cases useful info about processes that use the device is found by lsof(8) or fuser(1).)


Interestingly when I reboot and rerun the installer with the same 
settings it installs fine.


Can anybody explain it?

Thanks,
Adam


I randomly have the same on NVMe (M.2) disks.
Maybe erase the disks,
or try adding an MBR and/or partitions before installing Proxmox.
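
A sketch of the "erase the disks" suggestion: wiping all leftover filesystem, RAID and partition-table signatures before rerunning the installer (this destroys all data on the disk, and /dev/sda is only an example device):

wipefs --all /dev/sda       # remove filesystem/RAID signatures
sgdisk --zap-all /dev/sda   # clear the GPT and protective MBR as well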


Re: [PVE-User] Join cluster first time - problem

2019-01-23 Thread lord_Niedzwiedz

OK, when I added this on node2:
lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir,images
I see local-lvm on node2   ;-)
But I also see local-zfs (not active):
"could not activate storage 'local-zfs', zfs error: cannot import
'rpool': no such pool available (500)"


And the other way around:
on node 1 I see local-lvm, also not active.

Sorry for spamming the group (my first cluster ;-D)
I will not do it anymore  ;-)
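
Since /etc/pve/storage.cfg is cluster-wide, the usual way to stop each node complaining about the other's storage is to restrict every storage definition to the nodes that really have it; a sketch (node names are examples, pool/VG names taken from this thread):

# /etc/pve/storage.cfg
zfspool: local-zfs
    pool rpool/data
    content rootdir,images
    nodes node1

lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir,images
    nodes node2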

On 23.01.2019 at 16:29, lord_Niedzwiedz wrote:
"WARNING: Adding a node to the cluster will delete it's current 
/etc/pve/storage.cfg. If you have VMs stored on the node, be prepared 
to add back your storage locations if necessary. Even though the 
storage locations disappear from the GUI, your data is still there."

I added the node to the cluster.
But I lost local-zfs.
Now I have only "local" and nothing on the added node.
"could not activate storage 'local-zfs', zfs error: cannot import
'rpool': no such pool available (500)"

How do I recover from this?

On node 1 I had local-zfs.
On node 2 (the added one) I had local-lvm.
I do not see the latter, but I have changed the configuration in
/etc/pve/storage.cfg for node2 from local-lvm to local-zfs  ;-/

Regards ;)
Gregory

Hi,

Seems you have VMs on host2. Please read:
https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster#Adding_nodes_to_the_Cluster 



Cheers

On 23/1/19 at 15:37, lord_Niedzwiedz wrote:

        I am doing this for the first time.

I created the cluster OK on host1:

pvecm create klaster1
pvecm status

And on host2 I tried:

pvecm add toms.komndr.pl

detected the following error(s):
* this host already contains virtual guests
Check if node may join a cluster failed!

What did I do wrong ??




Re: [PVE-User] Join cluster first time - problem

2019-01-23 Thread lord_Niedzwiedz
"WARNING: Adding a node to the cluster will delete it's current 
/etc/pve/storage.cfg. If you have VMs stored on the node, be prepared to 
add back your storage locations if necessary. Even though the storage 
locations disappear from the GUI, your data is still there."

I added the node to the cluster.
But I lost local-zfs.
Now I have only "local" and nothing on the added node.
"could not activate storage 'local-zfs', zfs error: cannot import
'rpool': no such pool available (500)"

How do I recover from this?

On node 1 I had local-zfs.
On node 2 (the added one) I had local-lvm.
I do not see the latter, but I have changed the configuration in
/etc/pve/storage.cfg for node2 from local-lvm to local-zfs  ;-/

Regards ;)
Gregory

Hi,

Seems you have VMs on host2. Please read:
https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster#Adding_nodes_to_the_Cluster 



Cheers

On 23/1/19 at 15:37, lord_Niedzwiedz wrote:

        I am doing this for the first time.

I created the cluster OK on host1:

pvecm create klaster1
pvecm status

And on host2 I tried:

pvecm add toms.komndr.pl

detected the following error(s):
* this host already contains virtual guests
Check if node may join a cluster failed!

What did I do wrong ??




Re: [PVE-User] Join cluster first time - problem

2019-01-23 Thread lord_Niedzwiedz

    Hi,

OK, thank you.

I thought so. I'm building a test cluster on 2 test machines, just for a moment
and then removing it, for the first time.

I'm learning.

I've created it.
On host2 (the one being added) there cannot be any CT or VM ;/

Regards ;)
Gregory

Hi,

Seems you have VMs on host2. Please read:
https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster#Adding_nodes_to_the_Cluster 



Cheers

On 23/1/19 at 15:37, lord_Niedzwiedz wrote:

        I am doing this for the first time.

I created the cluster OK on host1:

pvecm create klaster1
pvecm status

And on host2 I tried:

pvecm add toms.komndr.pl

detected the following error(s):
* this host already contains virtual guests
Check if node may join a cluster failed!

What did I do wrong ??




[PVE-User] Join cluster first time - problem

2019-01-23 Thread lord_Niedzwiedz

        I am doing this for the first time.

I created the cluster OK on host1:

pvecm create klaster1
pvecm status

And on host2 I tried:

pvecm add tomas.komandor.pl

detected the following error(s):
* this host already contains virtual guests
Check if node may join a cluster failed!

What did I do wrong ??
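
The join check refuses because guests on the joining node could collide with the cluster's IDs and storage config. One hedged way through it is to back the guests up, remove them, join, and restore afterwards (the VMID 100, storage name and dump path are examples):

vzdump 100 --storage local --mode stop    # back up guest 100
qm destroy 100                            # remove it so the node is empty
pvecm add tomas.komandor.pl               # the join now passes the check
qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma 100   # bring the guest back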




Re: [PVE-User] aic94xx - problem

2019-01-23 Thread lord_Niedzwiedz



I have only apt install pve-kernel-4.10...
Where is the repository with the older kernels?
The question is: how do I downgrade and install the old 4.4.x kernel
in Proxmox ??!!

What command?


apt-cache search pve-kernel

-> do you find what you need? Then:

apt-get install pve-kernel-4.4-...


I don't have pve-kernel-4.4- in my repos.
The oldest available is pve-kernel-4.10.1-2-pve.

Must I install Proxmox 4.4,
or how do I get that apt repo?
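
The 4.4 kernels shipped with PVE 4.x on Debian Jessie, so they are absent from the PVE 5 repositories. A heavily hedged sketch: the repository line is the historical PVE 4.x one, the kernel version is only an example, and mixing jessie packages into a newer install is risky, so the repo is removed again immediately after the download:

echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" > /etc/apt/sources.list.d/pve4.list
apt update
apt download pve-kernel-4.4.134-1-pve     # fetch the .deb only, upgrade nothing
rm /etc/apt/sources.list.d/pve4.list && apt update
dpkg -i pve-kernel-4.4.134-1-pve_*.deb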




Re: [PVE-User] aic94xx - problem

2019-01-23 Thread lord_Niedzwiedz

I have only apt install pve-kernel-4.10...
Where is the repository with the older kernels?
The question is: how do I downgrade and install the old 4.4.x kernel
in Proxmox ??!!

What command?

https://bugzilla.redhat.com/show_bug.cgi?id=1443678
https://bugzilla.kernel.org/show_bug.cgi?id=201609


On Jan 22, 2019, at 17:24, lord_Niedzwiedz  wrote:

modprobe aic94xx


[  600.202300] aic94xx: Adaptec aic94xx SAS/SATA driver version 
1.0.3 loaded
[  600.202600] aic94xx :03:04.0: PCI IRQ 19 -> rerouted to 
legacy IRQ 19
[  600.203628] aic94xx: found Adaptec AIC-9405W SAS/SATA Host 
Adapter, device :03:04.0

[  600.203634] scsi host2: aic94xx
[  600.234870] aic94xx: Found sequencer Firmware version 1.1 (V17/10c6)
[  600.277468] aic94xx: device :03:04.0: SAS addr 
5005076a0144bd00, PCBA SN ORG, 4 phys, 4 enabled phys, flash 
present, BIOS build 1549

[  600.277488] [ cut here ]
[  600.277490] sysfs: cannot create duplicate filename 
'/devices/pci:00/:00:1c.0/:02:00.0/:03:04.0/revision'
[  600.277511] WARNING: CPU: 1 PID: 2281 at fs/sysfs/dir.c:31 
sysfs_warn_dup+0x56/0x70
[  600.277513] Modules linked in: aic94xx(+) ip_set ip6table_filter 
ip6_tables iptable_filter softdog nfnetlink_log nfnetlink 
dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio libcrc32c 
gpio_ich radeon ttm snd_pcm drm_kms_helper intel_powerclamp 
input_leds snd_timer drm snd soundcore lpc_ich ipmi_si ipmi_devintf 
i2c_algo_bit fb_sys_fops pcspkr serio_raw ipmi_msghandler 
syscopyarea sysfillrect sysimgblt i3000_edac shpchp mac_hid zfs(PO) 
zunicode(PO) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) 
vhost_net vhost tap ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp 
libiscsi_tcp libiscsi scsi_transport_iscsi sunrpc ip_tables x_tables 
autofs4 btrfs xor zstd_compress raid6_pq psmouse pata_acpi i2c_i801 
libsas scsi_transport_sas e1000 tg3 ptp pps_core [last unloaded: 
aic94xx]
[  600.277595] CPU: 1 PID: 2281 Comm: modprobe Tainted: P    W 
O 4.15.17-1-pve #1
[  600.277596] Hardware name: IBM IBM eServer 306m 
-[8491E6Y]-/M11ip/M11ix, BIOS IBM BIOS Version 
1.29-[PAE129AUS-1.29]- 02/09/2006

[  600.277600] RIP: 0010:sysfs_warn_dup+0x56/0x70
[  600.277602] RSP: 0018:ad21c338f9d0 EFLAGS: 00010282
[  600.277605] RAX:  RBX: 950033c18000 RCX: 
0006
[  600.277606] RDX: 0007 RSI: 0096 RDI: 
95003fd16490
[  600.277608] RBP: ad21c338f9e8 R08: 0001 R09: 
0384
[  600.277609] R10: 0001 R11: 0384 R12: 
c01a07c4
[  600.277611] R13: 950038c16908 R14: 950033314000 R15: 
0004
[  600.277613] FS:  7f9e245bb700() GS:95003fd0() 
knlGS:

[  600.277615] CS:  0010 DS:  ES:  CR0: 80050033
[  600.277617] CR2: 7fff0f58fff8 CR3: 0001731b4000 CR4: 
06e0

[  600.277619] Call Trace:
[  600.277628]  sysfs_add_file_mode_ns+0x116/0x170
[  600.277631]  sysfs_create_file_ns+0x2a/0x30
[  600.277635]  device_create_file+0x42/0x80
[  600.277643]  asd_pci_probe+0x91b/0xc10 [aic94xx]
[  600.277647]  local_pci_probe+0x4a/0xa0
[  600.277650]  pci_device_probe+0x109/0x1b0
[  600.277654]  driver_probe_device+0x2ba/0x4a0
[  600.277657]  __driver_attach+0xe2/0xf0
[  600.277660]  ? driver_probe_device+0x4a0/0x4a0
[  600.277663]  bus_for_each_dev+0x72/0xc0
[  600.277666]  driver_attach+0x1e/0x20
[  600.277668]  bus_add_driver+0x170/0x260
[  600.277671]  driver_register+0x60/0xe0
[  600.277675]  ? 0xc09a1000
[  600.277677]  __pci_register_driver+0x5a/0x60
[  600.277684]  aic94xx_init+0xf8/0x1000 [aic94xx]
[  600.277686]  ? 0xc09a1000
[  600.277689]  do_one_initcall+0x55/0x1ab
[  600.277693]  ? _cond_resched+0x1a/0x50
[  600.277697]  ? kmem_cache_alloc_trace+0x108/0x1b0
[  600.277700]  ? do_init_module+0x27/0x219
[  600.277703]  do_init_module+0x5f/0x219
[  600.277706]  load_module+0x28e6/0x2e00
[  600.277710]  ? ima_post_read_file+0x83/0xa0
[  600.277714]  SYSC_finit_module+0xe5/0x120
[  600.277717]  ? SYSC_finit_module+0xe5/0x120
[  600.277720]  SyS_finit_module+0xe/0x10
[  600.277723]  do_syscall_64+0x73/0x130
[  600.277726]  entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[  600.277728] RIP: 0033:0x7f9e240eb229
[  600.277730] RSP: 002b:7ffc26fa9c48 EFLAGS: 0246 ORIG_RAX: 
0139
[  600.277733] RAX: ffda RBX: 55cec3fcf450 RCX: 
7f9e240eb229
[  600.277734] RDX:  RSI: 55cec2823638 RDI: 
0003
[  600.277736] RBP: 55cec2823638 R08:  R09: 

[  600.277737] R10: 0003 R11: 0246 R12: 

[  600.277739] R13: 55cec3fd1230 R14: 0004 R15: 

[  600.277741] Code: 85 c0 48 89 c3 74 12 b9 00 10 00 00 48 89 c2 31 
f6 4c 89 ef e8 0c c7 ff ff 4c 89 e2 48 89 de 48 c7 c7 18 72 2f 86 e8 
ca 7f d8 ff <0f> 0b 48 89 df e8 90 0f f4 ff 5b 41 5c 41 5d 5d c3 66 
0f 1f 84

[  6

Re: [PVE-User] aic94xx - problem

2019-01-23 Thread lord_Niedzwiedz
The question is: how do I downgrade and install the old 4.4.x kernel
in Proxmox ??!!

What command?

https://bugzilla.redhat.com/show_bug.cgi?id=1443678
https://bugzilla.kernel.org/show_bug.cgi?id=201609


On Jan 22, 2019, at 17:24, lord_Niedzwiedz  wrote:

modprobe aic94xx


[  600.202300] aic94xx: Adaptec aic94xx SAS/SATA driver version 1.0.3 loaded
[  600.202600] aic94xx :03:04.0: PCI IRQ 19 -> rerouted to legacy IRQ 19
[  600.203628] aic94xx: found Adaptec AIC-9405W SAS/SATA Host Adapter, device 
:03:04.0
[  600.203634] scsi host2: aic94xx
[  600.234870] aic94xx: Found sequencer Firmware version 1.1 (V17/10c6)
[  600.277468] aic94xx: device :03:04.0: SAS addr 5005076a0144bd00, PCBA SN 
ORG, 4 phys, 4 enabled phys, flash present, BIOS build 1549
[  600.277488] [ cut here ]
[  600.277490] sysfs: cannot create duplicate filename 
'/devices/pci:00/:00:1c.0/:02:00.0/:03:04.0/revision'
[  600.277511] WARNING: CPU: 1 PID: 2281 at fs/sysfs/dir.c:31 
sysfs_warn_dup+0x56/0x70
[  600.277513] Modules linked in: aic94xx(+) ip_set ip6table_filter ip6_tables 
iptable_filter softdog nfnetlink_log nfnetlink dm_thin_pool dm_persistent_data 
dm_bio_prison dm_bufio libcrc32c gpio_ich radeon ttm snd_pcm drm_kms_helper 
intel_powerclamp input_leds snd_timer drm snd soundcore lpc_ich ipmi_si 
ipmi_devintf i2c_algo_bit fb_sys_fops pcspkr serio_raw ipmi_msghandler 
syscopyarea sysfillrect sysimgblt i3000_edac shpchp mac_hid zfs(PO) 
zunicode(PO) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) vhost_net vhost 
tap ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi 
scsi_transport_iscsi sunrpc ip_tables x_tables autofs4 btrfs xor zstd_compress 
raid6_pq psmouse pata_acpi i2c_i801 libsas scsi_transport_sas e1000 tg3 ptp 
pps_core [last unloaded: aic94xx]
[  600.277595] CPU: 1 PID: 2281 Comm: modprobe Tainted: PW O 
4.15.17-1-pve #1
[  600.277596] Hardware name: IBM IBM eServer 306m -[8491E6Y]-/M11ip/M11ix, 
BIOS IBM BIOS Version 1.29-[PAE129AUS-1.29]- 02/09/2006
[  600.277600] RIP: 0010:sysfs_warn_dup+0x56/0x70
[  600.277602] RSP: 0018:ad21c338f9d0 EFLAGS: 00010282
[  600.277605] RAX:  RBX: 950033c18000 RCX: 0006
[  600.277606] RDX: 0007 RSI: 0096 RDI: 95003fd16490
[  600.277608] RBP: ad21c338f9e8 R08: 0001 R09: 0384
[  600.277609] R10: 0001 R11: 0384 R12: c01a07c4
[  600.277611] R13: 950038c16908 R14: 950033314000 R15: 0004
[  600.277613] FS:  7f9e245bb700() GS:95003fd0() 
knlGS:
[  600.277615] CS:  0010 DS:  ES:  CR0: 80050033
[  600.277617] CR2: 7fff0f58fff8 CR3: 0001731b4000 CR4: 06e0
[  600.277619] Call Trace:
[  600.277628]  sysfs_add_file_mode_ns+0x116/0x170
[  600.277631]  sysfs_create_file_ns+0x2a/0x30
[  600.277635]  device_create_file+0x42/0x80
[  600.277643]  asd_pci_probe+0x91b/0xc10 [aic94xx]
[  600.277647]  local_pci_probe+0x4a/0xa0
[  600.277650]  pci_device_probe+0x109/0x1b0
[  600.277654]  driver_probe_device+0x2ba/0x4a0
[  600.277657]  __driver_attach+0xe2/0xf0
[  600.277660]  ? driver_probe_device+0x4a0/0x4a0
[  600.277663]  bus_for_each_dev+0x72/0xc0
[  600.277666]  driver_attach+0x1e/0x20
[  600.277668]  bus_add_driver+0x170/0x260
[  600.277671]  driver_register+0x60/0xe0
[  600.277675]  ? 0xc09a1000
[  600.277677]  __pci_register_driver+0x5a/0x60
[  600.277684]  aic94xx_init+0xf8/0x1000 [aic94xx]
[  600.277686]  ? 0xc09a1000
[  600.277689]  do_one_initcall+0x55/0x1ab
[  600.277693]  ? _cond_resched+0x1a/0x50
[  600.277697]  ? kmem_cache_alloc_trace+0x108/0x1b0
[  600.277700]  ? do_init_module+0x27/0x219
[  600.277703]  do_init_module+0x5f/0x219
[  600.277706]  load_module+0x28e6/0x2e00
[  600.277710]  ? ima_post_read_file+0x83/0xa0
[  600.277714]  SYSC_finit_module+0xe5/0x120
[  600.277717]  ? SYSC_finit_module+0xe5/0x120
[  600.277720]  SyS_finit_module+0xe/0x10
[  600.277723]  do_syscall_64+0x73/0x130
[  600.277726]  entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[  600.277728] RIP: 0033:0x7f9e240eb229
[  600.277730] RSP: 002b:7ffc26fa9c48 EFLAGS: 0246 ORIG_RAX: 
0139
[  600.277733] RAX: ffda RBX: 55cec3fcf450 RCX: 7f9e240eb229
[  600.277734] RDX:  RSI: 55cec2823638 RDI: 0003
[  600.277736] RBP: 55cec2823638 R08:  R09: 
[  600.277737] R10: 0003 R11: 0246 R12: 
[  600.277739] R13: 55cec3fd1230 R14: 0004 R15: 
[  600.277741] Code: 85 c0 48 89 c3 74 12 b9 00 10 00 00 48 89 c2 31 f6 4c 89 ef e8 
0c c7 ff ff 4c 89 e2 48 89 de 48 c7 c7 18 72 2f 86 e8 ca 7f d8 ff <0f> 0b 48 89 
df e8 90 0f f4 ff 5b 41 5c 41 5d 5d c3 66 0f 1f 84
[  600.277798] ---[ end trace e693b63cde4c2a43 ]---
[  600.293683] aic94xx :03:04.0: PCI IRQ 19 -&

[PVE-User] aic94xx - problem

2019-01-22 Thread lord_Niedzwiedz

modprobe aic94xx


[  600.202300] aic94xx: Adaptec aic94xx SAS/SATA driver version 1.0.3 loaded
[  600.202600] aic94xx :03:04.0: PCI IRQ 19 -> rerouted to legacy IRQ 19
[  600.203628] aic94xx: found Adaptec AIC-9405W SAS/SATA Host Adapter, 
device :03:04.0

[  600.203634] scsi host2: aic94xx
[  600.234870] aic94xx: Found sequencer Firmware version 1.1 (V17/10c6)
[  600.277468] aic94xx: device :03:04.0: SAS addr 5005076a0144bd00, 
PCBA SN ORG, 4 phys, 4 enabled phys, flash present, BIOS build 1549

[  600.277488] [ cut here ]
[  600.277490] sysfs: cannot create duplicate filename 
'/devices/pci:00/:00:1c.0/:02:00.0/:03:04.0/revision'
[  600.277511] WARNING: CPU: 1 PID: 2281 at fs/sysfs/dir.c:31 
sysfs_warn_dup+0x56/0x70
[  600.277513] Modules linked in: aic94xx(+) ip_set ip6table_filter 
ip6_tables iptable_filter softdog nfnetlink_log nfnetlink dm_thin_pool 
dm_persistent_data dm_bio_prison dm_bufio libcrc32c gpio_ich radeon ttm 
snd_pcm drm_kms_helper intel_powerclamp input_leds snd_timer drm snd 
soundcore lpc_ich ipmi_si ipmi_devintf i2c_algo_bit fb_sys_fops pcspkr 
serio_raw ipmi_msghandler syscopyarea sysfillrect sysimgblt i3000_edac 
shpchp mac_hid zfs(PO) zunicode(PO) zavl(PO) icp(PO) zcommon(PO) 
znvpair(PO) spl(O) vhost_net vhost tap ib_iser rdma_cm iw_cm ib_cm 
ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi sunrpc 
ip_tables x_tables autofs4 btrfs xor zstd_compress raid6_pq psmouse 
pata_acpi i2c_i801 libsas scsi_transport_sas e1000 tg3 ptp pps_core 
[last unloaded: aic94xx]
[  600.277595] CPU: 1 PID: 2281 Comm: modprobe Tainted: P    W O 
4.15.17-1-pve #1
[  600.277596] Hardware name: IBM IBM eServer 306m 
-[8491E6Y]-/M11ip/M11ix, BIOS IBM BIOS Version 1.29-[PAE129AUS-1.29]- 
02/09/2006

[  600.277600] RIP: 0010:sysfs_warn_dup+0x56/0x70
[  600.277602] RSP: 0018:ad21c338f9d0 EFLAGS: 00010282
[  600.277605] RAX:  RBX: 950033c18000 RCX: 
0006
[  600.277606] RDX: 0007 RSI: 0096 RDI: 
95003fd16490
[  600.277608] RBP: ad21c338f9e8 R08: 0001 R09: 
0384
[  600.277609] R10: 0001 R11: 0384 R12: 
c01a07c4
[  600.277611] R13: 950038c16908 R14: 950033314000 R15: 
0004
[  600.277613] FS:  7f9e245bb700() GS:95003fd0() 
knlGS:

[  600.277615] CS:  0010 DS:  ES:  CR0: 80050033
[  600.277617] CR2: 7fff0f58fff8 CR3: 0001731b4000 CR4: 
06e0

[  600.277619] Call Trace:
[  600.277628]  sysfs_add_file_mode_ns+0x116/0x170
[  600.277631]  sysfs_create_file_ns+0x2a/0x30
[  600.277635]  device_create_file+0x42/0x80
[  600.277643]  asd_pci_probe+0x91b/0xc10 [aic94xx]
[  600.277647]  local_pci_probe+0x4a/0xa0
[  600.277650]  pci_device_probe+0x109/0x1b0
[  600.277654]  driver_probe_device+0x2ba/0x4a0
[  600.277657]  __driver_attach+0xe2/0xf0
[  600.277660]  ? driver_probe_device+0x4a0/0x4a0
[  600.277663]  bus_for_each_dev+0x72/0xc0
[  600.277666]  driver_attach+0x1e/0x20
[  600.277668]  bus_add_driver+0x170/0x260
[  600.277671]  driver_register+0x60/0xe0
[  600.277675]  ? 0xc09a1000
[  600.277677]  __pci_register_driver+0x5a/0x60
[  600.277684]  aic94xx_init+0xf8/0x1000 [aic94xx]
[  600.277686]  ? 0xc09a1000
[  600.277689]  do_one_initcall+0x55/0x1ab
[  600.277693]  ? _cond_resched+0x1a/0x50
[  600.277697]  ? kmem_cache_alloc_trace+0x108/0x1b0
[  600.277700]  ? do_init_module+0x27/0x219
[  600.277703]  do_init_module+0x5f/0x219
[  600.277706]  load_module+0x28e6/0x2e00
[  600.277710]  ? ima_post_read_file+0x83/0xa0
[  600.277714]  SYSC_finit_module+0xe5/0x120
[  600.277717]  ? SYSC_finit_module+0xe5/0x120
[  600.277720]  SyS_finit_module+0xe/0x10
[  600.277723]  do_syscall_64+0x73/0x130
[  600.277726]  entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[  600.277728] RIP: 0033:0x7f9e240eb229
[  600.277730] RSP: 002b:7ffc26fa9c48 EFLAGS: 0246 ORIG_RAX: 
0139
[  600.277733] RAX: ffda RBX: 55cec3fcf450 RCX: 
7f9e240eb229
[  600.277734] RDX:  RSI: 55cec2823638 RDI: 
0003
[  600.277736] RBP: 55cec2823638 R08:  R09: 

[  600.277737] R10: 0003 R11: 0246 R12: 

[  600.277739] R13: 55cec3fd1230 R14: 0004 R15: 

[  600.277741] Code: 85 c0 48 89 c3 74 12 b9 00 10 00 00 48 89 c2 31 f6 
4c 89 ef e8 0c c7 ff ff 4c 89 e2 48 89 de 48 c7 c7 18 72 2f 86 e8 ca 7f 
d8 ff <0f> 0b 48 89 df e8 90 0f f4 ff 5b 41 5c 41 5d 5d c3 66 0f 1f 84

[  600.277798] ---[ end trace e693b63cde4c2a43 ]---
[  600.293683] aic94xx :03:04.0: PCI IRQ 19 -> rerouted to legacy IRQ 19
[  600.293702] aic94xx: probe of :03:04.0 failed with error -17


Re: [PVE-User] Ubuntu 14.04 boot fail on PVE 5.3-7

2019-01-15 Thread lord_Niedzwiedz



Hello,

Since the weekend I have had problems with LXC containers (upstart-based) booting on
Proxmox 5.3-7. Today I created a new container and after starting it only a
few upstart processes are running:
root@ubuntu14:~# ps ax
   PID TTY  STAT   TIME COMMAND
 1 ?Ss 0:00 /sbin/init
37 ?S  0:00 @sbin/plymouthd --mode=boot --attach-to-session
39 ?Ss 0:00 plymouth-upstart-bridge
47 ?S  0:00 mountall --daemon
   283 ?S  0:00 upstart-socket-bridge --daemon
  1445 ?Ss 0:00 /bin/bash
  1481 ?R+ 0:00 ps ax
root@ubuntu14:~# ifconfig
loLink encap:Local Loopback
   inet addr:127.0.0.1  Mask:255.0.0.0
   inet6 addr: ::1/128 Scope:Host
   UP LOOPBACK RUNNING  MTU:65536  Metric:1
   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:1000
   RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)


After switching to runlevel 2 the container continues to boot:
root@ubuntu14:~# ifup eth0; telinit 2
root@ubuntu14:~# ps ax; ifconfig
   PID TTY  STAT   TIME COMMAND
 1 ?Ss 0:00 /sbin/init
47 ?S  0:00 mountall --daemon
   283 ?S  0:00 upstart-socket-bridge --daemon
  1445 ?Ss 0:00 /bin/bash
  1550 ?Ss 0:00 /usr/sbin/sshd -D
  1559 ?S  0:00 /bin/sh /etc/network/if-up.d/ntpdate
  1562 ?S  0:00 lockfile-create /var/lock/ntpdate-ifup
  1569 ?Ss 0:00 cron
  1585 ?Ss 0:00 /usr/sbin/irqbalance
  1687 ?Ss 0:00 /usr/lib/postfix/master
  1691 ?S  0:00 pickup -l -t unix -u -c
  1692 ?S  0:00 qmgr -l -t unix -u
  1708 ?S  0:00 /bin/sh /etc/init.d/ondemand background
  1714 ?S  0:00 sleep 60
  1716 console  Ss+0:00 /sbin/getty -8 38400 console
  1718 lxc/tty2 Ss+0:00 /sbin/getty -8 38400 tty2
  1719 lxc/tty1 Ss+0:00 /sbin/getty -8 38400 tty1
  1734 ?R+ 0:00 ps ax
eth0  Link encap:Ethernet  HWaddr 6a:f7:05:0c:43:a4
   inet addr:192.168.xxx.239  Bcast:192.168.xxx.255  Mask:255.255.255.0
   inet6 addr: fe80::68f7:5ff:fe0c:43a4/64 Scope:Link
   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
   RX packets:36 errors:0 dropped:0 overruns:0 frame:0
   TX packets:29 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:1000
   RX bytes:3647 (3.6 KB)  TX bytes:2342 (2.3 KB)

loLink encap:Local Loopback
   inet addr:127.0.0.1  Mask:255.0.0.0
   inet6 addr: ::1/128 Scope:Host
   UP LOOPBACK RUNNING  MTU:65536  Metric:1
   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:1000
   RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

What can I do to get a fully booted container without the runlevel switching?

BR,
Michal Szamocki
Cirrus

    Hi Michał.

I have the same issue with Fedora.

Try this:

chkconfig --level 2345 network on
#systemctl disable NetworkManager

# in file /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=wanwww5
GATEWAY=84.204.162.20


Best regards
Grzegorz Misiek  ;-]


Re: [PVE-User] Container problem

2018-12-04 Thread lord_Niedzwiedz



root@hayne:~# systemctl start pve-container@108
Job for pve-container@108.service failed because the control process exited with error code.
See "systemctl status pve-container@108.service" and "journalctl -xe" for details.


root@hayne:~# systemctl status pve-container@108.service
● pve-container@108.service - PVE LXC Container: 108
    Loaded: loaded (/lib/systemd/system/pve-container@.service; static; vendor preset: enabled)
    Active: failed (Result: exit-code) since Tue 2018-12-04 10:25:45 CET; 12s ago
      Docs: man:lxc-start
            man:lxc
            man:pct
   Process: 9268 ExecStart=/usr/bin/lxc-start -n 108 (code=exited, status=1/FAILURE)

Dec 04 10:25:44 hayne systemd[1]: Starting PVE LXC Container: 108...
Dec 04 10:25:45 hayne systemd[1]: pve-container@108.service: Control process exited, code=exited status=1
Dec 04 10:25:45 hayne systemd[1]: Failed to start PVE LXC Container: 108.
Dec 04 10:25:45 hayne systemd[1]: pve-container@108.service: Unit entered failed state.
Dec 04 10:25:45 hayne systemd[1]: pve-container@108.service: Failed with result 'exit-code'.




How about at least some minimal context and more telling logs? ^^

# lxc-start -n 108 -l DEBUG -o ct108-start.log

optionally add the "-F" flag to start the CT in the foreground.
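
combined into a single invocation, that is:

# lxc-start -n 108 -F -l DEBUG -o ct108-start.log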

cheers,
Thomas

Ok, I brought the container back.
NFS had messed things up for me.

But now I cannot restore the disk ;-/

root@hayne:/Working# echo sync; qm importdisk 108 vm-108-disk-0.raw local-lvm; sync

sync

Configuration file 'nodes/hayne/qemu-server/108.conf' does not exist

Ok, I see.
Sorry (no coffee).

Ok, I see, this is not a virtual machine but a container.

How do I import a disk into a container (i.e. replace its volume)?
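
Containers have no importdisk equivalent; a hypothetical sketch of one way
to do it (the volume name, size and /dev/pve device path are assumptions for
a default local-lvm setup, and the raw image must contain a bare filesystem):

# allocate a new volume for CT 108 and copy the raw image into it
pvesm alloc local-lvm 108 vm-108-disk-1 32G
dd if=vm-108-disk-0.raw of=/dev/pve/vm-108-disk-1 bs=1M status=progress
# then reference it in /etc/pve/lxc/108.conf, e.g.:
#   rootfs: local-lvm:vm-108-disk-1,size=32G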


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Container problem

2018-12-04 Thread lord_Niedzwiedz


On 04.12.2018 at 10:41, Thomas Lamprecht wrote:

On 12/4/18 10:27 AM, lord_Niedzwiedz wrote:

root@hayne:~# systemctl start pve-container@108
Job for pve-container@108.service failed because the control process exited 
with error code.
See "systemctl status pve-container@108.service" and "journalctl -xe" for 
details.

root@hayne:~# systemctl status pve-container@108.service
● pve-container@108.service - PVE LXC Container: 108
Loaded: loaded (/lib/systemd/system/pve-container@.service; static; vendor 
preset: enabled)
Active: failed (Result: exit-code) since Tue 2018-12-04 10:25:45 CET; 12s 
ago
  Docs: man:lxc-start
man:lxc
man:pct
   Process: 9268 ExecStart=/usr/bin/lxc-start -n 108 (code=exited, 
status=1/FAILURE)

Dec 04 10:25:44 hayne systemd[1]: Starting PVE LXC Container: 108...
Dec 04 10:25:45 hayne systemd[1]: pve-container@108.service: Control process 
exited, code=exited status=1
Dec 04 10:25:45 hayne systemd[1]: Failed to start PVE LXC Container: 108.
Dec 04 10:25:45 hayne systemd[1]: pve-container@108.service: Unit entered 
failed state.
Dec 04 10:25:45 hayne systemd[1]: pve-container@108.service: Failed with result 
'exit-code'.



How about at least some minimal context and more telling logs? ^^

# lxc-start -n 108 -l DEBUG -o ct108-start.log

optionally add the "-F" flag to start the CT in the foreground.

cheers,
Thomas

Ok, I brought the container back.
NFS had messed things up for me.

But now I cannot restore the disk ;-/

root@hayne:/Working# echo sync; qm importdisk 108 vm-108-disk-0.raw local-lvm; sync

sync

Configuration file 'nodes/hayne/qemu-server/108.conf' does not exist


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Container problem

2018-12-04 Thread lord_Niedzwiedz

root@hayne:~# systemctl start pve-container@108
Job for pve-container@108.service failed because the control process 
exited with error code.
See "systemctl status pve-container@108.service" and "journalctl -xe" 
for details.


root@hayne:~# systemctl status pve-container@108.service
● pve-container@108.service - PVE LXC Container: 108
   Loaded: loaded (/lib/systemd/system/pve-container@.service; static; 
vendor preset: enabled)
   Active: failed (Result: exit-code) since Tue 2018-12-04 10:25:45 
CET; 12s ago

 Docs: man:lxc-start
   man:lxc
   man:pct
  Process: 9268 ExecStart=/usr/bin/lxc-start -n 108 (code=exited, 
status=1/FAILURE)


Dec 04 10:25:44 hayne systemd[1]: Starting PVE LXC Container: 108...
Dec 04 10:25:45 hayne systemd[1]: pve-container@108.service: Control 
process exited, code=exited status=1

Dec 04 10:25:45 hayne systemd[1]: Failed to start PVE LXC Container: 108.
Dec 04 10:25:45 hayne systemd[1]: pve-container@108.service: Unit 
entered failed state.
Dec 04 10:25:45 hayne systemd[1]: pve-container@108.service: Failed with 
result 'exit-code'.


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Proxmox - CT problem

2018-11-26 Thread lord_Niedzwiedz

        Hi,
I have a debian-9-turnkey-symfony_15.0-1_amd64 container.
It worked well for half a year.
Now, every now and then, MySQL disappears on me.
How is this possible?
I do not touch or change anything.
Are there any auto-updates inside?
Any idea what could be causing this?
After restoring the base version everything is OK for a day or two, and then
it breaks again ;-/


Linux walls 4.15.18-4-pve #1 SMP PVE 4.15.18-23 (Thu, 30 Aug 2018 
13:04:08 +0200) x86_64

You have mail.
root@walls ~# /etc/init.d/mysql restart
[....] Restarting mysql (via systemctl): mysql.service
Failed to restart mysql.service: Unit mysql.service not found.
 failed!
root@walls ~# service mysqld restart
Failed to restart mysqld.service: Unit mysqld.service not found.
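
A diagnostic sketch with standard systemd/dpkg tooling (the mariadb unit name
is a guess; TurnKey Debian 9 images normally ship MariaDB):

systemctl list-unit-files | grep -Ei 'mysql|mariadb'
dpkg -l | grep -Ei 'mysql|mariadb'
journalctl -u mariadb -n 50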
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] hdparm and fio in ProxMox raidZ

2018-11-07 Thread lord_Niedzwiedz

    I have a RAIDZ pool in Proxmox.
How can I test its speed?

On LVM I do:
hdparm -tT /dev/nvme0n1
hdparm -tT /dev/mapper/pve-root

fio --filename=/dev/mapper/pve-root --direct=1 --rw=read --bs=1m 
--size=2G --numjobs=200 --runtime=60 --group_reporting --name=file1


But with RAIDZ I don't have /dev/mapper/pve-root.
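
With ZFS there is no block device to point hdparm at; a sketch that benchmarks
through the filesystem instead (pool name rpool is the installer default,
adjust to yours; --direct=1 is omitted because older ZFS releases reject
O_DIRECT):

fio --name=seqread --filename=/rpool/fio.test --size=2G --bs=1M \
    --rw=read --numjobs=4 --runtime=60 --group_reporting
rm /rpool/fio.test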


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] NVMe - RAID faster than 4GB/s

2018-11-07 Thread lord_Niedzwiedz

        How can I get RAID faster than 4 GB/s on an AMD processor?

Hardware?
Software?

One disk does 3 GB/s.
I have 5.
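
A sketch for measuring the aggregate raw read throughput of all five disks at
once (read-only; fio accepts a colon-separated device list, and the names
assume the nvme0n1..nvme4n1 scheme used elsewhere in this thread):

fio --name=aggread --ioengine=libaio --direct=1 --rw=read --bs=1M \
    --iodepth=16 --runtime=30 --group_reporting \
    --filename=/dev/nvme0n1:/dev/nvme1n1:/dev/nvme2n1:/dev/nvme3n1:/dev/nvme4n1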


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] NVMe - RAID Z - Proxmox

2018-11-07 Thread lord_Niedzwiedz

        Good morning,

After a long (unfortunately) fight, I was able to install your Proxmox on the
server with RAIDZ-1 through RAIDZ-3.
To do this I had to change some settings in the BIOS (the system did not 
want to start without the "legacy" option).

https://help.komandor.pl/Wymiana/iKVM_capture.jpg
https://help.komandor.pl/Wymiana/iKVM_capture1.jpg
https://help.komandor.pl/Wymiana/iKVM_capture2.jpg

And manually edit the partition table on each of the disks.

root@gandalf8:~# sfdisk -d /dev/nvme0n1         (drives from nvme0n1 to 
nvme5n1)

label: gpt
label-id: AD03123E-3D5D-4FD2-A7F3-9B6247F88CEA
device: /dev/nvme0n1
unit: sectors
first-lba: 34        (this must be set explicitly)
last-lba: 976773134

/dev/nvme0n1p1: start = 34, size = 2014, type = 
21686148-6449-6E6F-744E-656564454649, uuid = 
CA6B9D71-BFEC-4EC2-8DB5-BB79B426D20D


cfdisk - the first partition must begin at sector 34 and end at 2047.
Example for a 500GB disk:

Size     Sectors      Type
1007K    2014S        BIOS boot
465.8G   976754702S   Solaris /usr & Apple ZFS
8M       16385S       Solaris reserved 1

And the same for all 5 drives.

Without this, the Proxmox installer crashed every time.
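
A sketch for replicating the corrected layout onto the remaining disks (sgdisk
is in the gdisk package; double-check the device names before writing):

sgdisk --replicate=/dev/nvme1n1 /dev/nvme0n1   # copy nvme0n1's table to nvme1n1
sgdisk --randomize-guids /dev/nvme1n1          # give the copy unique GUIDs
# repeat for the remaining drives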



On 31.10.2018 at 15:24, lord_Niedzwiedz wrote:

I upgraded the BIOS/firmware of the Supermicro server motherboard.
I set everything in the BIOS to "legacy" mode.
Not only in the boot menu:
Supermicro -> BIOS -> Advanced -> PCIe/PCI/PnP Configuration
(everything on legacy).


Weird, because on my PC (Ryzen processor) everything runs in UEFI mode
(Windows 10 and Fedora work perfectly).


It works now.
But:

1) Under sysrescue-cd a single NVMe disk does 2600 MB/s.
RAID 5/6 = max 3600 MB/s (on 4-5 drives).
Why not N*2600 - 2600 MB/s?

2) I created RAID 1 or RAID 10. It works.
But Proxmox displays a message about RAID Z1-2:
https://help.komandor.pl/Wymiana/iKVM_capture.jpg
3) I installed Proxmox on one M.2 disk (LVM) as the boot system.
I have 5 disks.
I can of course install Proxmox on 1 disk (or on RAID 1 with two disks).
The question is how to add the other disks.
Is it worth creating a RAIDZ from the other 3-4 disks?
What configuration would you recommend?


I'm trying to install Proxmox on 4 NVMe drives.
One on the motherboard, two on PCIe.

Proxmox sees everything during the installation.
I choose the ZFS (RAIDZ-1) option.

And I get an error at the end.
"unable to create zfs root pool"
GRUB is not yet working with ZFS on EFI. Try to switch to legacy boot in
BIOS if possible or use LVM for the installation.


Attached pictures (1-5) .jpg.
https://help.komandor.pl/Wymiana/1.jpg
https://help.komandor.pl/Wymiana/2.jpg
https://help.komandor.pl/Wymiana/3.jpg
https://help.komandor.pl/Wymiana/4.jpg
https://help.komandor.pl/Wymiana/5.jpg





___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user



[PVE-User] hdparm

2018-10-31 Thread lord_Niedzwiedz

        How do I test RAID/volume speed in Proxmox?
hdparm -tT ... ?
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] NVMe - RAID Z - Proxmox

2018-10-31 Thread lord_Niedzwiedz

I upgraded the BIOS/firmware of the Supermicro server motherboard.
I set everything in the BIOS to "legacy" mode.
Not only in the boot menu:
Supermicro -> BIOS -> Advanced -> PCIe/PCI/PnP Configuration
(everything on legacy).


Weird, because on my PC (Ryzen processor) everything runs in UEFI mode
(Windows 10 and Fedora work perfectly).


It works now.
But:

1) Under sysrescue-cd a single NVMe disk does 2600 MB/s.
RAID 5/6 = max 3600 MB/s (on 4-5 drives).
Why not N*2600 - 2600 MB/s?

2) I created RAID 1 or RAID 10. It works.
But Proxmox displays a message about RAID Z1-2:
https://help.komandor.pl/Wymiana/iKVM_capture.jpg
3) I installed Proxmox on one M.2 disk (LVM) as the boot system.
I have 5 disks.
I can of course install Proxmox on 1 disk (or on RAID 1 with two disks).
The question is how to add the other disks.
Is it worth creating a RAIDZ from the other 3-4 disks?
What configuration would you recommend?


I'm trying to install Proxmox on 4 NVMe drives.
One on the motherboard, two on PCIe.

Proxmox sees everything during the installation.
I choose the ZFS (RAIDZ-1) option.

And I get an error at the end.
"unable to create zfs root pool"

GRUB is not yet working with ZFS on EFI. Try to switch to legacy boot in
BIOS if possible or use LVM for the installation.


Attached pictures (1-5) .jpg.
https://help.komandor.pl/Wymiana/1.jpg
https://help.komandor.pl/Wymiana/2.jpg
https://help.komandor.pl/Wymiana/3.jpg
https://help.komandor.pl/Wymiana/4.jpg
https://help.komandor.pl/Wymiana/5.jpg





___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] NVMe

2018-10-30 Thread lord_Niedzwiedz

        proxmox-ve_5.2-1.iso

Which os?

On Tue, Oct 30, 2018, 10:08 AM lord_Niedzwiedz  wrote:


  I set legacy boot in the BIOS.
I used only one disk with LVM.
And the system does not start with this.

Any suggestion?

I have a problem.
I'm trying to install Proxmox on 4 NVMe drives.
One on the motherboard, two on PCIe.

Proxmox sees everything during the installation.
I choose the ZFS (RAIDZ-1) option.

And I get an error at the end.
"unable to create zfs root pool"

GRUB is not yet working with ZFS on EFI. Try to switch to legacy boot in
BIOS if possible or use LVM for the installation.


Attached pictures (1-5) .jpg.
https://help.komandor.pl/Wymiana/1.jpg
https://help.komandor.pl/Wymiana/2.jpg
https://help.komandor.pl/Wymiana/3.jpg
https://help.komandor.pl/Wymiana/4.jpg
https://help.komandor.pl/Wymiana/5.jpg


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user




[PVE-User] NVMe

2018-10-29 Thread lord_Niedzwiedz

        Hi,
I have a problem.
I'm trying to install Proxmox on 4 NVMe drives.
One on the motherboard, two on PCIe.

Proxmox sees everything during the installation.
I choose the ZFS (RAIDZ-1) option.

And I get an error at the end.
"unable to create zfs root pool"

Attached pictures (1-5) .jpg.
https://help.komandor.pl/Wymiana/1.jpg
https://help.komandor.pl/Wymiana/2.jpg
https://help.komandor.pl/Wymiana/3.jpg
https://help.komandor.pl/Wymiana/4.jpg
https://help.komandor.pl/Wymiana/5.jpg

greetings
Grzegorz
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox usb automount

2018-10-04 Thread lord_Niedzwiedz



root@hayneee:~# apt install pve5-usb-automount
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package pve5-usb-automount


apt install pve5-usb-automount




On Oct 3, 2018, at 05:16, lord_Niedzwiedz <sir_misi...@o2.pl> wrote:

  Hi,

How do I easily add auto-mounting of a USB device to Proxmox?

example:
/dev/sde1    /media/sde1

normally this is done by a desktop environment, which is not often used on
headless Proxmox VE setups.

You may want to take a look at udisks2 [0][1]; there are some wikis [2] and
tutorials showing its usage.

Playing around with udev/automountfs could also be an option.

[0]: https://www.freedesktop.org/wiki/Software/udisks/
[1]: https://packages.debian.org/en/stretch/udisks2
[2]: https://wiki.archlinux.org/index.php/udisks
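
With udisks2 installed, a one-off mount is a single command (udisksctl ships
with the package; device name as in the example above):

udisksctl mount -b /dev/sde1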

apt install udisks2
vim /etc/fstab
/dev/sde1    /media/sde1    auto    defaults    0 0


It works fine (with auto sync after data copy).
Thanks

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox usb automount

2018-10-04 Thread lord_Niedzwiedz

root@hayneee:~# apt install pve5-usb-automount
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package pve5-usb-automount


apt install pve5-usb-automount




On Oct 3, 2018, at 05:16, lord_Niedzwiedz <sir_misi...@o2.pl> wrote:

 Hi,

How do I easily add auto-mounting of a USB device to Proxmox?

example:
/dev/sde1    /media/sde1

Regards
Gregor

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Proxmox usb automount

2018-10-03 Thread lord_Niedzwiedz

    Hi,

How do I easily add auto-mounting of a USB device to Proxmox?

example:
/dev/sde1    /media/sde1

Regards
Gregor

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Fedora 28

2018-09-12 Thread lord_Niedzwiedz

        How do I do this upgrade?

root@hayne1:~# apt upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@hayne1:~# apt update
Ign:1 http://ftp.pl.debian.org/debian stretch InRelease
Hit:2 http://ftp.pl.debian.org/debian stretch-updates InRelease
Hit:3 http://ftp.pl.debian.org/debian stretch Release
Hit:4 http://security.debian.org stretch/updates InRelease
Ign:6 https://enterprise.proxmox.com/debian/pve stretch InRelease
Err:7 https://enterprise.proxmox.com/debian/pve stretch Release
  401  Unauthorized
Reading package lists... Done
E: The repository 'https://enterprise.proxmox.com/debian/pve stretch 
Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is 
therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user 
configuration details.
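
The 401 means the enterprise repository is configured without a subscription;
a sketch of the usual fix on PVE 5/stretch (comment out the enterprise entry
in /etc/apt/sources.list.d/pve-enterprise.list first):

echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list
apt update && apt dist-upgrade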



On 12.09.2018 at 09:27, Stoiko Ivanov wrote:

Hi,

The problem with Fedora containers was fixed with pve-container 2.0-25.
Could you try again after upgrading?

Cheers,
stoiko

On Wed, Sep 12, 2018 at 08:31:11AM +0200, lord_Niedzwiedz wrote:

root@hayne:~# pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.15.17-1-pve)
pve-manager: 5.2-1 (running version: 5.2-1/0fcd7879)
pve-kernel-4.15: 5.2-1
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-31
libpve-guest-common-perl: 2.0-16
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-18
pve-cluster: 5.0-27
pve-container: 2.0-23
pve-docs: 5.2-3
pve-firewall: 3.0-8
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-5
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
qemu-server: 5.0-26
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.8-pve1~bpo9

On 11.09.2018 at 16:31, Stoiko Ivanov wrote:

Hi,

cannot reproduce the problem with a similar config (MAC and IP addresses
changed, but otherwise the same).

which versions of our stack do you run? (please post the output of
pveversion -v).

Thanks!


On Tue, Sep 11, 2018 at 03:08:35PM +0200, lord_Niedzwiedz wrote:

Hi,

I did not change anything except the two things below.
And Fedora's network works after a restart.

root@hayne1:/rpool# pct config 102
arch: amd64
cores: 2
hostname: wanwww14
memory: 8192
net0: 
name=eth0,bridge=vmbr0,firewall=1,gw=8.8.152.1,hwaddr=F5:E4:9B:64:22:84,ip=8.8.152.104/24,type=veth
ostype: fedora
rootfs: local-zfs:subvol-102-disk-1,size=28G
swap: 1024
unprivileged: 1

I didn't change anything.
This is the config from the Proxmox GUI inside Fedora.
[root@wanwww8 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=8.8.152.104
NETMASK=255.255.255.0
GATEWAY=8.8.152.1

Only the two changes below are required for Fedora to start up with a
working network.
Of course, the host server should also have properly configured DNS, etc.

Gregory Bear


Hi,

could you please send the container-config (`pct config $vmid`) from the
node and the contents of all files (redact if needed) from
/etc/systemd/network/* ?

Thanks!

On Tue, Sep 11, 2018 at 02:38:59PM +0200, lord_Niedzwiedz wrote:

        Hi,
I got your official Fedora 27.

you should now be able to get the Fedora 28 template directly from us.
# pveam update

should pull the newest appliance index (this is normally done automatically,
once a day); then either download it through the WebUI or with the CLI:

# pveam download STORAGE fedora-28-default_20180907_amd64.tar.xz

cheers,
Thomas

The problem is in the configuration of the Fedora system inside (fedora-28 too).

I must add these two things:

chkconfig --levels 2345 network on

vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=ala_ma_kota
GATEWAY=88.44.152.1

Now I restart the container and the IP from Proxmox works.

ip a

Cheers,
Gregory Bear



___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Fedora 28

2018-09-12 Thread lord_Niedzwiedz

root@hayne:~# pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.15.17-1-pve)
pve-manager: 5.2-1 (running version: 5.2-1/0fcd7879)
pve-kernel-4.15: 5.2-1
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-31
libpve-guest-common-perl: 2.0-16
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-18
pve-cluster: 5.0-27
pve-container: 2.0-23
pve-docs: 5.2-3
pve-firewall: 3.0-8
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-5
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
qemu-server: 5.0-26
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.8-pve1~bpo9

On 11.09.2018 at 16:31, Stoiko Ivanov wrote:

Hi,

cannot reproduce the problem with a similar config (MAC and IP addresses
changed, but otherwise the same).

which versions of our stack do you run? (please post the output of
pveversion -v).

Thanks!


On Tue, Sep 11, 2018 at 03:08:35PM +0200, lord_Niedzwiedz wrote:

Hi,

I did not change anything except the two things below.
And Fedora's network works after a restart.

root@hayne1:/rpool# pct config 102
arch: amd64
cores: 2
hostname: wanwww14
memory: 8192
net0: 
name=eth0,bridge=vmbr0,firewall=1,gw=8.8.152.1,hwaddr=F5:E4:9B:64:22:84,ip=8.8.152.104/24,type=veth
ostype: fedora
rootfs: local-zfs:subvol-102-disk-1,size=28G
swap: 1024
unprivileged: 1

I didn't change anything.
This is the config from the Proxmox GUI inside Fedora.
[root@wanwww8 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=8.8.152.104
NETMASK=255.255.255.0
GATEWAY=8.8.152.1

Only the two changes below are required for Fedora to start up with a
working network.
Of course, the host server should also have properly configured DNS, etc.

Gregory Bear


Hi,

could you please send the container-config (`pct config $vmid`) from the
node and the contents of all files (redact if needed) from
/etc/systemd/network/* ?

Thanks!

On Tue, Sep 11, 2018 at 02:38:59PM +0200, lord_Niedzwiedz wrote:

           Hi,
I got your official Fedora 27.

you should now be able to get the Fedora 28 template directly from us.
# pveam update

should pull the newest appliance index (this is normally done automatically,
once a day); then either download it through the WebUI or with the CLI:

# pveam download STORAGE fedora-28-default_20180907_amd64.tar.xz

cheers,
Thomas

The problem is in the configuration of the Fedora system inside (fedora-28 too).

I must add these two things:

chkconfig --levels 2345 network on

vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=ala_ma_kota
GATEWAY=88.44.152.1

Now I restart the container and the IP from Proxmox works.

ip a

Cheers,
Gregory Bear



___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Fedora 28

2018-09-11 Thread lord_Niedzwiedz



         Hi,
I got your official Fedora 27.


you should now be able to get the Fedora 28 template directly from us.
# pveam update

should pull the newest appliance index (this is normally done automatically,
once a day); then either download it through the WebUI or with the CLI:

# pveam download STORAGE fedora-28-default_20180907_amd64.tar.xz
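
to find the exact template name first, the available list can be filtered, e.g.:

# pveam available --section system | grep -i fedora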

cheers,
Thomas

The problem is in the configuration of the Fedora system inside (fedora-28 too).

I must add these two things:

chkconfig --levels 2345 network on

vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=ala_ma_kota
GATEWAY=88.44.152.1

Now I restart the container and the IP from Proxmox works.

ip a

Cheers,
Gregory Bear

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Fedora 28

2018-09-11 Thread lord_Niedzwiedz

Hi,

I did not change anything except the two things below.
And Fedora's network works after a restart.

root@hayne1:/rpool# pct config 102
arch: amd64
cores: 2
hostname: wanwww14
memory: 8192
net0: 
name=eth0,bridge=vmbr0,firewall=1,gw=8.8.152.1,hwaddr=F5:E4:9B:64:22:84,ip=8.8.152.104/24,type=veth

ostype: fedora
rootfs: local-zfs:subvol-102-disk-1,size=28G
swap: 1024
unprivileged: 1

I didn't change anything.
This is the config from the Proxmox GUI inside Fedora.
[root@wanwww8 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=8.8.152.104
NETMASK=255.255.255.0
GATEWAY=8.8.152.1

Only the two changes below are required for Fedora to start up
with a working network.

Of course, the host server should also have properly configured DNS, etc.

Gregory Bear


Hi,

could you please send the container-config (`pct config $vmid`) from the
node and the contents of all files (redact if needed) from
/etc/systemd/network/* ?

Thanks!

On Tue, Sep 11, 2018 at 02:38:59PM +0200, lord_Niedzwiedz wrote:

          Hi,
I got your official Fedora 27.

you should now be able to get the Fedora 28 template directly from us.
# pveam update

should pull the newest appliance index (this is normally done automatically,
once a day); then either download it through the WebUI or with the CLI:

# pveam download STORAGE fedora-28-default_20180907_amd64.tar.xz

cheers,
Thomas

The problem is in the configuration of the Fedora system inside (fedora-28 too).

I must add these two things:

chkconfig --levels 2345 network on

vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=ala_ma_kota
GATEWAY=88.44.152.1

Now I restart the container and the IP from Proxmox works.

ip a

Cheers,
Gregory Bear



___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Fedora 28

2018-09-06 Thread lord_Niedzwiedz

        Hi,
I got your official Fedora 27.
And it didn't work properly.

After adding the file /etc/sysconfig/network I upgraded the distribution.
Works fine ;)

Regards,
Gregory

Hi,

where did you get the container image from?

(Currently we do not yet have an official Fedora 28 template)
The ones from http://uk.images.linuxcontainers.org/images/ do work with
a static ip set.

Recently we changed the network setup for Fedora >= 27, to create
systemd-networkd files, since this is what is used in the upstream
templates.

Maybe just install systemd-networkd?
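
A sketch of that route inside the container (recent Fedora releases ship
systemd-networkd as its own package):

dnf install systemd-networkd
systemctl enable --now systemd-networkd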

Regards,
stoiko

On Wed, 5 Sep 2018 15:48:40 +0200
lord_Niedzwiedz  wrote:


      The container does not set a static IP without this file ;-/

I must add:

vi /etc/sysconfig/network





___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Fedora 28

2018-09-05 Thread lord_Niedzwiedz

    The container does not set a static IP without this file ;-/

I must add:

vi /etc/sysconfig/network

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user