Does anybody have an idea how to fix this?
Thu, 4 Oct 2018 at 13:53, Eduard Ahmatgareev:
Hi,
I tried to create a new backup job on:
root@cluster-13-1:/usr/share/perl5/PVE# pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.15.18-4-pve)
pve-manager: 5.2-9 (running version: 5.2-9/4b30e8f9)
pve-kernel-4.15: 5.2-8
pve-kernel-4.15.18-5-pve: 4.15.18-24
pve-kernel-4.15.18-4-pve: 4.15.18-23
Hi All,
Does anybody have issues with the CLI on the latest version?
root@cluster-13-1:~# pvesh get /access/users --output-format json-pretty
[
{
"enable" : 1,
"expire" : 0,
"userid" : "apicontrol1@pve"
},
{
"email" : "apicont...@admin.ru",
"enable" : 1,
Hi. I have a small problem with VNC connections from the API to Proxmox.
I tried to run this command:
root@cluster-2-2:~# pvesh create /nodes/cluster-2-2/qemu/116/vncproxy
and after 10 seconds I got the following:
nc6: connection timed out
command '/bin/nc6 -l -p 5900 -w 10 -e '/usr/sbin/qm vncproxy 116
2>/dev/null''
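For context, that timeout is expected if nothing connects: the vncproxy call opens a listener that waits 10 seconds for a VNC client, and `nc6` gives up with exactly this error otherwise. The response carries a ticket and port which a client must use within that window, e.g. via the vncwebsocket endpoint. A sketch that only builds the two API paths involved (endpoint and parameter names are from the API schema; the helper functions themselves are mine):

```python
from urllib.parse import quote

def vncproxy_path(node: str, vmid: int) -> str:
    """API path that allocates a VNC proxy (POST) for a VM."""
    return f"/nodes/{node}/qemu/{vmid}/vncproxy"

def vncwebsocket_path(node: str, vmid: int, port: int, ticket: str) -> str:
    """API path a websocket client uses to consume the proxy it was given."""
    return (f"/nodes/{node}/qemu/{vmid}/vncwebsocket"
            f"?port={port}&vncticket={quote(ticket, safe='')}")

print(vncproxy_path("cluster-2-2", 116))  # → /nodes/cluster-2-2/qemu/116/vncproxy
```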
I had a problem with the watchdog on IPMI, so I used the following to check.
Package version:
ii  pve-manager  4.1-5  amd64  The Proxmox Virtual Environment
Check that the package status is ii, then test the watchdog:
echo "A" | socat - UNIX-CONNECT:/var/run/watchdog-mux.sock
after this command, the server
I tried to add a new node to the Proxmox cluster and ran into a problem:
pvecm add cluster_ip --force
Are you sure you want to continue connecting (yes/no)? yes
node cluster-2-4 already defined
copy corosync auth key
stopping pve-cluster service
backup old database
Job for corosync.service failed. See
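"node cluster-2-4 already defined" usually means a stale entry from an earlier join attempt is still present in the cluster configuration. A possible recovery path, sketched under that assumption (verify the node name with `pvecm nodes` first; `delnode` is destructive, so be sure the entry really is stale):

```shell
# On an existing cluster member: check for the stale entry, then drop it.
pvecm nodes
pvecm delnode cluster-2-4

# Back on the node being added: retry the join without --force.
pvecm add cluster_ip
```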
zfsutils: 0.6.5-pve7~jessie
from the ISO:
proxmox-ve_4.1-2f9650d4-21.iso
2016-02-04 12:01 GMT+02:00 Thomas Lamprecht <t.lampre...@proxmox.com>:
>
>
> On 02/04/2016 10:58 AM, Eduard Ahmatgareev wrote:
>
>> I tried add new node to proxmox cluster and had problem:
>>
The Proxmox API documentation at http://pve.proxmox.com/pve2-api-doc/ is opening as a white page at the moment.
Could you help me with an API query? I need to get the lock status of a vmid, and if the vmid is not in a backup or another action at that time, I need to unlock it.
Often, after a failed backup, the vmid remains locked.
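One way to script this, sketched here: fetch the VM config (`pvesh get /nodes/<node>/qemu/<vmid>/config`), and if it still carries a `lock` key left over from a failed operation, clear it with `qm unlock <vmid>`. The `should_unlock` helper and the exact set of lock values treated as stale are my assumptions; check them against your PVE version before relying on this:

```python
import json
import subprocess

# Lock values treated as stale leftovers of a failed operation.
# This set is an assumption -- verify against your PVE version.
STALE_LOCKS = {"backup", "snapshot", "snapshot-delete"}

def should_unlock(config: dict) -> bool:
    """Return True if the VM config carries a lock we consider stale."""
    return config.get("lock") in STALE_LOCKS

def unlock_if_stale(node: str, vmid: int) -> bool:
    """Fetch the VM config via pvesh and unlock the VM if the lock looks stale."""
    out = subprocess.check_output(
        ["pvesh", "get", f"/nodes/{node}/qemu/{vmid}/config",
         "--output-format", "json"])
    if should_unlock(json.loads(out)):
        subprocess.check_call(["qm", "unlock", str(vmid)])
        return True
    return False
```

Note this only clears the lock; it does not check whether a backup task is still actually running, so combine it with a look at the task list first.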
Am I understanding this correctly? I need to:
1. Share an iSCSI target on the storage
2. Add this iSCSI target to the storages in the Proxmox GUI
3. Create LVM on the iSCSI disk: pvcreate /dev/sdb
4. Create a VG: vgcreate LVM01 /dev/sdb
5. Add LVM01 in the GUI and mark it as shared
Am I right?
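After step 5, /etc/pve/storage.cfg should end up with entries roughly like the following sketch. The storage names, portal address, and IQN here are invented placeholders; `shared 1` is what marks the VG as shared across the cluster:

```
iscsi: san1
        portal 10.10.10.100
        target iqn.2018-01.example.com:target1
        content none

lvm: LVM01
        vgname LVM01
        content images
        shared 1
```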
When I add iscsi to gui
my solution:
cat /etc/default/clvm
# Bourne shell compatible script, sourced by /etc/init.d/clvm to set
# additional arguments for clvmd.
# enable clvmd
START_CLVM=yes
# Startup timeout:
CLVMDTIMEOUT=180
# Volume groups to activate on startup:
LVM_VGS="LVM01 LVM02 LVM03 LVM04 SSD01 SSD02"
> [...] all GET results from the API
> in the browser nicely formatted with bootstrap, for example if your Proxmox
> VE IP is 10.10.10.3 use:
>
> https://10.10.10.3:8006/api2/html/cluster/tasks
>>
>
> To see a quick view from what you get. pvesh is also a handy tool :)
>
> regards,
>
Could you please help me? I can't find how to get the currently running tasks from the API.
I need to get all the info about each currently running task, and which vmid the task is connected to.
I tried pvesh get /nodes/{node}/tasks, but this query returns only completed tasks.
I need this for manually creating
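Two things that may help: the cluster-wide list at /cluster/tasks also includes active tasks, and finished entries can be told apart because they carry an `endtime` field. The vmid is embedded in the UPID string (`UPID:node:pid:pstart:starttime:type:id:user:`). A sketch of filtering and extracting it (the sample UPIDs below are invented; the field layout is my reading of the UPID format, so double-check it on your system):

```python
def is_running(task: dict) -> bool:
    """A task entry without an endtime has not finished yet."""
    return "endtime" not in task

def upid_vmid(upid: str) -> str:
    """Extract the id field (the vmid for qm/vzdump tasks) from a UPID.

    Layout assumed: UPID:node:pid:pstart:starttime:type:id:user:
    """
    return upid.split(":")[6]

# Invented sample: one running vzdump task, one finished qmstart task.
tasks = [
    {"upid": "UPID:cluster-13-1:0000A1B2:00C0FFEE:5BB5F000:vzdump:116:root@pam:"},
    {"upid": "UPID:cluster-13-1:0000A1B3:00C0FFEF:5BB5F100:qmstart:101:root@pam:",
     "endtime": 1538650000},
]

running_vmids = [upid_vmid(t["upid"]) for t in tasks if is_running(t)]
print(running_vmids)  # → ['116']
```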