Re: [PVE-User] TASK ERROR: VM quit/powerdown failed

2017-01-30 Thread Yannis Milios
Hello,

Maybe these two links can help?


https://pve.proxmox.com/wiki/Qemu-guest-agent

https://pve.proxmox.com/wiki/Acpi_kvm
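Not from the thread, but for reference: besides installing the agent inside the Windows guest, the agent option also has to be enabled on the VM itself. A sketch from the PVE host CLI (VMID 100 is a placeholder; the `qm agent` subcommand may not exist on older qm versions):

```shell
# Enable the QEMU guest agent option for VM 100
qm set 100 --agent 1

# After fully stopping and starting the VM, check that the agent responds
qm agent 100 ping
```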

Yannis



On Mon, 30 Jan 2017 at 23:47, Leonardo Dourado <
leonardo.dour...@itrace.com.br> wrote:

> Hello guys!
>
> I am trying to run a scheduled backup (Stop mode), and I believe it
> requires shutting down the VM (Windows Server 2008 R2)... For some reason
> it's not working: I get the message "TASK ERROR: VM quit/powerdown failed"
> when it tries to run...
>
> I have installed the VirtIO drivers on the guest (the QEMU guest agent is
> also enabled). If the machine is off, the backup runs perfectly.
>
> Any help is very welcome!
>
> Leonardo D.
>
-- 
Sent from Gmail Mobile
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] One cluster at two sites with an even number of nodes

2017-01-30 Thread Dmitry Petuhov
30.01.2017 20:48, Marco M. Gabriel wrote:
> is there some kind of best practice for deploying a cluster with an even
> number of nodes at two different sites?
>
> In case of a split between the datacenters, neither site would have a
> quorum, and HA would not work. I could fiddle with the number of votes on
> one site, but then HA would only work on one site if there is a connection
> loss.
>
> Any other hints, maybe? Or is this a no-go?
You could use a single node (maybe an old/weak one, just for quorum) in a third 
datacenter. But for HA between datacenters, you need shared storage that will 
remain available if one datacenter is offline.




[PVE-User] TASK ERROR: VM quit/powerdown failed

2017-01-30 Thread Leonardo Dourado
Hello guys!

I am trying to run a scheduled backup (Stop mode), and I believe it requires 
shutting down the VM (Windows Server 2008 R2)... For some reason it's not 
working: I get the message "TASK ERROR: VM quit/powerdown failed" when it tries 
to run...

I have installed the VirtIO drivers on the guest (the QEMU guest agent is also 
enabled). If the machine is off, the backup runs perfectly.

Any help is very welcome!

Leonardo D.



Re: [PVE-User] One cluster at two sites with an even number of nodes

2017-01-30 Thread Michael Rasmussen
On Mon, 30 Jan 2017 17:48:48 +
"Marco M. Gabriel"  wrote:

> 
> Any other hints, maybe? Or is this a no-go?
> 
It is possible, but the cost is high: €300,000-400,000 of hardware at each
datacenter (virtual cluster, infrastructure, and SAN). Our computer
facility supports 30,000 users.

At work we have two datacenters (actually three) on campus, all
interconnected with dual 10 Gb fiber. The clusters remain at the two
datacenters, while the central switch center, which runs the Cisco phone
system cluster, acts as a witness for the other two datacenters.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
Happiness adds and multiplies as we divide it with others.




Re: [PVE-User] One cluster at two sites with an even number of nodes

2017-01-30 Thread Kevin Lemonnier
> 
> In case of a split between the datacenters, neither site would have a
> quorum, and HA would not work. I could fiddle with the number of votes on
> one site, but then HA would only work on one site if there is a connection
> loss.
> 

Yeah, you can't do it. That's just a recipe for disaster for that exact reason:
neither side has a way to know whether it should be taking over. If you need to
be able to have either side take over, just don't do HA and have an admin choose
where to boot the VMs (you can have a cluster without HA to make that easy).

Alternatively, you can use a node in a third location to help the "good" side,
if any, get a quorum. Something like a VM somewhere would be enough; it doesn't
have to actually be able to run VMs.
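A hedged sketch of that third-site tie-breaker idea in corosync terms (the hostnames and addresses below are made up, and on Proxmox VE the corosync configuration is normally managed through pvecm, so treat this purely as an illustration):

```text
# /etc/corosync/corosync.conf (fragment)
# Two datacenter nodes plus a third-site witness node: whichever
# datacenter can still reach the witness keeps a majority (2 of 3 votes).
nodelist {
  node {
    name: dc1-node1
    ring0_addr: 10.0.1.1
    quorum_votes: 1
  }
  node {
    name: dc2-node1
    ring0_addr: 10.0.2.1
    quorum_votes: 1
  }
  node {
    name: witness
    ring0_addr: 10.0.3.1
    quorum_votes: 1
  }
}
```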

-- 
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111




Re: [PVE-User] One cluster at two sites with an even number of nodes

2017-01-30 Thread Dietmar Maurer
> In case of a split between the datacenters, neither site would have a
> quorum, and HA would not work. I could fiddle with the number of votes on
> one site, but then HA would only work on one site if there is a connection
> loss.

I would not use such a setup because of the above problem. Instead, simply use
a separate cluster at each site.

Note: in most cases it makes no sense to share resources/storage between
different datacenters.



[PVE-User] One cluster at two sites with an even number of nodes

2017-01-30 Thread Marco M. Gabriel
Hello,

is there some kind of best practice for deploying a cluster with an even
number of nodes at two different sites?

In case of a split between the datacenters, neither site would have a
quorum, and HA would not work. I could fiddle with the number of votes on
one site, but then HA would only work on one site if there is a connection
loss.

Any other hints, maybe? Or is this a no-go?

Thanks,
Marco


[PVE-User] Point my backup to ISCSI Volume (to Emmanuel Kasper)

2017-01-30 Thread Leonardo Dourado
Hi Emmanuel!

What I did (and the only way that worked) is:

I pointed my iSCSI at PVE (web interface);
Partitioned it, formatted it (ext4), created a directory, and mounted it on /mnt/storage;
Added the line "sleep 60 ; mount /dev/sdb1 /mnt/storage";

That was the only way it worked...
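For what it's worth, instead of a "sleep ; mount" workaround, the same mount can often go into /etc/fstab with the _netdev option, which delays mounting until the network (and therefore the iSCSI session) is up. A sketch, assuming the device really does stay /dev/sdb1 (a UUID= reference is safer if iSCSI device ordering changes between boots):

```text
# /etc/fstab -- mount the iSCSI-backed ext4 partition after the network is up
/dev/sdb1  /mnt/storage  ext4  defaults,_netdev  0  2
```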

I'll try "step B" as mentioned when I have more time!!!

Much appreciated,
Leonardo D.


From: pve-user [pve-user-boun...@pve.proxmox.com] on behalf of 
pve-user-requ...@pve.proxmox.com [pve-user-requ...@pve.proxmox.com]
Sent: Monday, 30 January 2017 9:00
To: pve-user@pve.proxmox.com
Subject: pve-user Digest, Vol 106, Issue 27


Today's Topics:

   1. Point my backup to ISCSI Volume (Leonardo Dourado)
   2. Re: software RAID in 4.2 (Miguel González)
   3. Re: Point my backup to ISCSI Volume (Emmanuel Kasper)


--

Message: 1
Date: Sun, 29 Jan 2017 19:45:29 +
From: Leonardo Dourado 
To: "pve-user@pve.proxmox.com" 
Subject: [PVE-User] Point my backup to ISCSI Volume
Message-ID:

Content-Type: text/plain; charset="us-ascii"

Hi gents!

I have configured my NAS and created a LUN for my backups, and I added it on 
PVE (it was recognized correctly)... However, when I go to Backup/Storage (in 
the web interface) and try to select the LUN (named ITRCNAS1), it does not 
appear...

If I point to the mounted directory on console it works: /mnt/storage.

Is there a way to use the LUN directly? I'm not sure if I can mount a LUN 
volume in fstab...

LUN is already partitioned and formatted as ext4, it's my /dev/sdb1 device.

Note:
On the web interface I see the format of that unit is still RAW.
Any help is much appreciated,
Leonardo D.



Re: [PVE-User] software RAID in 4.2

2017-01-30 Thread Miguel González
On 01/24/17 10:22 AM, Eneko Lacunza wrote:
> Hi Miguel,
> 
> El 24/01/17 a las 10:11, Miguel González escribió:
>> Reads and in both, Proxmox and therefore in guest VMs.
>>
>> As you can see in my messages writes seem to be more or less fine around
>> 140 MB/s in Proxmox while reads are around 40 MB/s.
>>
>> At this point I don't know if there is something related to hardware or
>> software, I have raised a ticket with support.
> 
> Ok, I don't think pveperf's "fsyncs/second" is good, it's way too low.
> 
> Also, 42MB/s isn't very good from VM, but maybe qcow2 is expanding the
> file there.
> 
> Re-reading your first post I think you have a hw/cabling issue. Does sdb
> give same hdparm results as sda?

OK, as suggested by support I took down all VMs and ran the tests with
nothing running. I still find fsyncs low from time to time, which I
don't understand; varying from 2 to 40 is a bit too much variation from my
point of view.


root@myserver:~# pveperf /vz/
CPU BOGOMIPS: 42667.60
REGEX/SECOND: 1099581
HD SIZE: 1809.50 GB (/dev/mapper/pve-data)
BUFFERED READS: 138.73 MB/sec
AVERAGE SEEK TIME: 18.94 ms
FSYNCS/SECOND: 1.97
DNS EXT: 17.82 ms
DNS INT: 13.27 ms (myserver)
root@myserver:~# pveperf /vz/
CPU BOGOMIPS: 42667.60
REGEX/SECOND: 1162253
HD SIZE: 1809.50 GB (/dev/mapper/pve-data)
BUFFERED READS: 123.24 MB/sec
AVERAGE SEEK TIME: 13.28 ms
FSYNCS/SECOND: 39.64
DNS EXT: 11.67 ms
DNS INT: 19.81 ms (myserver)
root@myserver:~# pveperf /vz/
CPU BOGOMIPS: 42667.60
REGEX/SECOND: 1123387
HD SIZE: 1809.50 GB (/dev/mapper/pve-data)
BUFFERED READS: 121.61 MB/sec
AVERAGE SEEK TIME: 13.83 ms
FSYNCS/SECOND: 22.67
DNS EXT: 14.01 ms
DNS INT: 13.90 ms (myserver)
root@myserver:~# pveperf /vz/
CPU BOGOMIPS: 42667.60
REGEX/SECOND: 1138502
HD SIZE: 1809.50 GB (/dev/mapper/pve-data)
BUFFERED READS: 100.28 MB/sec
AVERAGE SEEK TIME: 13.74 ms
FSYNCS/SECOND: 41.03
DNS EXT: 13.72 ms
DNS INT: 12.70 ms (myserver)
root@myserver:~# pveperf /vz/
CPU BOGOMIPS: 42667.60
REGEX/SECOND: 1102421
HD SIZE: 1809.50 GB (/dev/mapper/pve-data)
BUFFERED READS: 134.41 MB/sec
AVERAGE SEEK TIME: 12.74 ms
FSYNCS/SECOND: 30.34
DNS EXT: 11.25 ms
DNS INT: 14.16 ms (myserver)
root@myserver:~# hdparm -tT /dev/sda

/dev/sda:
Timing cached reads: 16160 MB in 2.00 seconds = 8087.23 MB/sec
Timing buffered disk reads: 434 MB in 3.00 seconds = 144.60 MB/sec
root@myserver:~# hdparm -tT /dev/sdb

/dev/sdb:
Timing cached reads: 16492 MB in 2.00 seconds = 8253.77 MB/sec
Timing buffered disk reads: 456 MB in 3.01 seconds = 151.44 MB/sec
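For anyone who wants to reproduce the FSYNCS/SECOND figure outside pveperf, here is a minimal sketch of the same idea (this is not pveperf's actual implementation, just a timed write-plus-fsync loop on a temp file):

```python
import os
import tempfile
import time

def fsyncs_per_second(path=".", duration=1.0):
    """Count how many small write+fsync cycles complete in `duration` seconds."""
    fd, name = tempfile.mkstemp(dir=path)
    count = 0
    try:
        deadline = time.time() + duration
        while time.time() < deadline:
            os.write(fd, b"x" * 4096)  # small block, then force it to stable storage
            os.fsync(fd)
            count += 1
    finally:
        os.close(fd)
        os.unlink(name)
    return count / duration

print(fsyncs_per_second("/tmp"))
```

Large run-to-run swings in a loop like this usually point at the write-cache and barrier behavior of the underlying device rather than the filesystem on top of it.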
