Noticed this when testing restoration of Win 7 memory snapshots - the VM is
working fine when restored, except for mouse integration; it is no longer
transparent, i.e. the mouse is captured when clicking in the spice viewer.
The only way to solve it is by either rebooting the VM or restarting the
I saw that Windows VSS support via qemu-ga was confirmed for the feature plan
back in April:
http://comments.gmane.org/gmane.linux.pve.user/1008
Will it be integrated with snapshots and scheduled backups? Any idea of a
time scale?
Thanks, and thanks for the great product. Having a lot of
Is it possible to change the default Graphic Card for new VM's to Spice?
thanks,
--
Lindsay
We backup our XenServer production server images to removable USB
disks overnight; these are rotated daily and stored offsite.
With ProxMox I'm thinking of the following:
- On the ProxMox server, ensure the same mount point for the drives is used
via suitable udev rules (a rough sketch follows below)
- Make it available as
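For illustration only - a udev rule roughly like the following (the label
match and mount point are made-up placeholders, not from the actual setup)
would give each rotated drive the same device name, which can then be
mounted at a fixed path:

  # /etc/udev/rules.d/99-backup-disk.rules (hypothetical sketch)
  SUBSYSTEM=="block", ENV{ID_FS_LABEL}=="BACKUP*", SYMLINK+="backupdisk"

  # /etc/fstab - mount the symlink at a fixed path (noauto so it can be
  # mounted by a hook script or usbmount when the drive is plugged in)
  /dev/backupdisk  /mnt/backup  ext4  noauto,defaults  0  0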
I've been testing importing our XenServer (6.2) Windows VM's to
ProxMox and have settled on a technique that works well for me
- Tried using the native VHD disk file. Apart from the painful
process of identifying which one is correct, the qemu-img conversion
process gave me a lot of grief.
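For reference, the conversion step being described is typically along these
lines (file names are placeholders, not from the original post):

  # convert a XenServer VHD export into a raw image usable by Proxmox/KVM
  qemu-img convert -p -O raw exported-disk.vhd vm-101-disk-1.raw

(-O qcow2 works the same way if a qcow2 target is preferred.)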
The forum seems rather more active.
--
Lindsay
The configuration for each VM is stored in simple conf files, e.g. for
VM 101 on node PMX1 the conf file would be:
/etc/pve/nodes/pmx1/qemu-server/101.conf
The format is simple text and easy to edit yourself. Just shut down the VM first!
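As a rough illustration only (all values are invented, not from the post),
such a conf file is just one key: value pair per line:

  # /etc/pve/nodes/pmx1/qemu-server/101.conf (illustrative sketch)
  name: win7-test
  memory: 4096
  sockets: 1
  cores: 2
  ostype: win7
  net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
  virtio0: local:101/vm-101-disk-1.raw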
On 4 December 2013 08:43, Richard Laager
Is there any way to add to the backup email from the job-start phase?
e.g. I'm testing for a mounted USB drive in the job-start phase and it would
be nice to indicate in the email why the job failed.
thanks,
--
Lindsay
On Sun, 15 Dec 2013 01:16:14 PM Dietmar Maurer wrote:
Anything you write to stdout/stderr should show up in the backup log.
Only the stuff printed after each individual backup-start seems to show up
in the VM backup log. Output in job-start, before the VM backups themselves,
is not there.
--
Ah, I was just about to write that $logfd wasn't working for me :) a
list of the args in the script only reveals one argument being passed,
the phase (job-start);
Not to worry,
thanks.
On 16 December 2013 15:26, Dietmar Maurer diet...@proxmox.com wrote:
Only the stuff printed after each
On 16 December 2013 15:55, Dietmar Maurer diet...@proxmox.com wrote:
Ah, I was just about to write that $logfd wasn't working for me :) a list of
the args in the script only reveals one argument being passed, the phase
(job-start);
Not to worry,
Sorry, that hint was wrong. Please read my
Thanks
On Dec 16, 2013 5:09 PM, Dietmar Maurer diet...@proxmox.com wrote:
Just to clarify - we can't currently write to the *email* log in the
job-start phase.
Yes, that does not work.
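For anyone following along, a minimal sketch of the kind of hook script being
discussed - assuming the standard vzdump hook mechanism where the phase is
passed as the first argument (path and message are made up). Aborting at
job-start at least makes the reason visible in the task log, even though it
cannot be written into the email body:

  #!/bin/sh
  # hypothetical /usr/local/bin/vzdump-hook.sh, referenced from
  # /etc/vzdump.conf via: script: /usr/local/bin/vzdump-hook.sh
  phase="$1"
  if [ "$phase" = "job-start" ]; then
      # abort the whole job if the backup drive is not mounted
      if ! mountpoint -q /mnt/backup; then
          echo "backup drive not mounted at /mnt/backup - aborting job" >&2
          exit 1
      fi
  fi
  exit 0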
On Wed, 18 Dec 2013 06:16:54 AM Lindsay Mathieson wrote:
Is there a template meant for use as a LAN-LAN
VPN Gateway?
thanks,
ps. Preferably something compatible with Netgear Routers.
--
Lindsay
Is there a template meant for use as a LAN-LAN
VPN Gateway?
thanks,
--
Lindsay
Mine are pitiful - new install and can't resist fiddling with it :) Currently
at 5 days.
Anyone over a year?
--
Lindsay
On Fri, 27 Dec 2013 04:17:49 PM Daejuan Jacobs wrote:
For some reason using SPICE (qxl) with an Ubuntu guest VM does not want to
work. I can install most of the time, but when I log in and attempt to
launch any application, it freezes and the CPU load goes to 100%+.
Have you installed
On Thu, 9 Jan 2014 05:02:52 PM Gilberto Nunes wrote:
And it is there!
But even so, I have more than 30 files on the backup storage...
Maybe I need to set the maxfiles parameter in vzdump.cron...
It's 30 backups per VM, not 30 per storage. Does that account for what you are
seeing?
--
Lindsay
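If it helps, the retention being discussed is normally set per backup job -
a hedged example (VM id and storage name are placeholders):

  # keep at most 3 backups for this VM on this storage; the same value can
  # go into /etc/vzdump.conf as 'maxfiles: 3'
  vzdump 101 --storage backup-nas --maxfiles 3 --compress lzo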
On Sat, 1 Feb 2014 10:17:07 PM Tonči Stipičević wrote:
I forgot to write that I used apt-get dist-upgrade
but nothing changed and the subscription warning still appears when logging in
Here is my update, dist-upgrade, pveversion output
You still have the enterprise repo enabled, you
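A sketch of the usual fix (repository line shown for the wheezy-era release
discussed here; adjust to your version):

  # disable the enterprise repo, which needs a subscription
  sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
  # add the no-subscription repo instead
  echo "deb http://download.proxmox.com/debian wheezy pve-no-subscription" \
      > /etc/apt/sources.list.d/pve-no-subscription.list
  apt-get update && apt-get dist-upgrade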
On Tue, 4 Feb 2014 02:14:31 PM Frank, Petric wrote:
Installed:
Proxmox 3.1 (dist-upgrade from today)
Spice-Proxy: virt-viewer-x64-0.6.0.msi (as linked from proxmox wiki page)
It seems that spiceproxy does not define the release-cursor key and the
default Ctrl-Alt is not working under
On Tue, 4 Feb 2014 02:46:46 PM Frank, Petric wrote:
added the lines 1403 to 1405 and restarted pvedaemon.
Works now.
Excellent.
Thanks for your fast support. Maybe it can be made configurable in the next
proxmox release.
I don't think it will be configurable, but the same setting will
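For context, the setting being discussed ends up as the release-cursor entry
in the virt-viewer connection file the spice proxy hands to the client -
roughly like this (the key binding shown is only an example):

  # excerpt of a remote-viewer .vv connection file (illustrative)
  [virt-viewer]
  type=spice
  release-cursor=ctrl+alt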
On 12 February 2014 17:13, Dietmar Maurer diet...@proxmox.com wrote:
I have the same problems. One VM sometimes stops backup at 40-43%, leaving it
locked. Very annoying.
Is the VM image OK? Please check with 'qemu-img check ...'. We already
observed such behavior when the image format contains
On 18 February 2014 09:36, Bruce B bruceb...@gmail.com wrote:
Hi everyone,
I am looking for a quick and reliable off-site backup service provider and
solution that can back up my Proxmox data based on the conditions below:
- Be mindful of bandwidth usage and back up only the changed files / data
Took me a while to figure this out - had a few Windows 7 VM's; if I
left them running, eventually I could spice connect, but nothing would
happen - the spice client would just be a little black box.
Turned out the VM had power saving enabled and was going into
hibernation. I can turn that off easy
I see that the latest non-commercial updates are:
The following NEW packages will be installed:
gdisk hdparm libicu48 librados2-perl libuuid-perl module-init-tools
powermgmt-base pve-kernel-2.6.32-27-pve spiceterm
The following packages will be upgraded:
apt apt-utils base-files ceph-common
I'm testing backing up a vm to a USB Drive attached to a Qnap NAS,
shared out via samba.
It consistently stalls at 17% and can't be stopped via the web gui.
The only thing that works is a reboot of the server and the vm has to
be unlocked.
The samba share is mounted via the following in fstab:
On 24 February 2014 16:38, Lindsay Mathieson
lindsay.mathie...@gmail.com wrote:
I'm testing backing up a vm to a USB Drive attached to a Qnap NAS,
shared out via samba.
It consistently stalls at 17% and can't be stopped via the web gui.
The only thing that works is a reboot of the server
Is there a best solution for automounting USB drives on a proxmox node? is
usbmount the way to go?
https://wiki.debian.org/usbmount
Note: This is for automounting of backup drives.
--
Lindsay
On Tue, 25 Feb 2014 11:12:52 AM Diaolin wrote:
If you are root on the proxmox node you can mount manually
(it's better in my opinion) or use usbmount as you said.
I need automount so that office staff can swap drives without tech knowledge.
If you will use the usb as passthrough you shouldn't
Wondering if anyone can suggest a reason for this. I have 7 VM's being backed
up off a node, direct to a USB3 external disk via a Directory based Storage.
6 of the VM's have backup times varying from 8 min to 52 min. The du -h
sizes of their disks vary from 16GB to 57GB.
The 7th vm has a du size
servers to proxmox - all three of them, but we separated out functionality so
they became 5 virtual servers as well.
Have migrated the office desktops as well and are working on the dev ones too.
Developers actually quite like using the spice client; it's better than the
VMWare one.
Lindsay
To clarify, I'm perfectly happy with the backup times, just wondering
why one VM takes so much more time than other similar VM's.
More info: it's actually a clone of another VM. The original only takes
37 min, compared to 240 min for its clone.
--
Lindsay
The KVM release notes (http://wiki.qemu.org/ChangeLog/1.7#Guest_agent)
mention that the Windows guest agent now supports VSS.
Any idea where I could download the agent from?
NB: I am not a Red Hat customer.
--
Lindsay
To clarify, I'm happy with my backup times, just wondering why one VM takes
radically longer than the others.
--
Lindsay
We had a power failure this morning and yes, haven't sorted out our UPS yet
... :(
Everything autostarted ok, but because the NAS is so much slower to start up
than the Proxmox nodes, all the VM's on the shared storage failed to autostart.
Once the NAS was up, the shared storage was
On Sun, 2 Mar 2014 10:05:58 AM Lindsay Mathieson wrote:
We had a power failure this morning and yes, haven't sorted out our UPS yet
It occurs to me this could still be a problem even with a UPS. Any power
outage of more than a few minutes would still shut everything down.
--
Lindsay
On Mon, 17 Mar 2014 10:43:19 AM Fábio Rabelo wrote:
I am planning the migration to the 3.2 PVE version, and I intend to use
the 3.10 kernel version.
I never use OpenVZ, just KVM.
Any recommendations or remarks?
I'd be curious too - any big advantages to the 3.10 kernel over 2.6.32?
--
Which role gives maximum access to a VM or resource - PVEAdmin or Administrator?
--
Lindsay
On Thu, 20 Mar 2014 06:10:30 AM Dietmar Maurer wrote:
Which role gives maximum access to a VM or resource - PVEAdmin or
Administrator?
Administrator
Thanks
--
Lindsay
This morning updated the kernel and BIOS on both servers and none of
the users even noticed.
Gotta love live migration ...
Thanks for the great product guys!
--
Lindsay
On Fri, 20 Jun 2014 02:39:50 PM Gilberto Nunes wrote:
Can you point me to some way to encrypt or protect these VM's?
Perhaps create an encryption layer or whatever will work...
Well, you could encrypt sensitive data inside the container on.
Not overly familiar with containers, but I believe they use what
On Mon, 23 Jun 2014 01:51:15 PM Daejuan Jacobs wrote:
If you are purchasing a VPS, you are trusting the owner of the server with
your data. By design, they have complete control over the hardware and
host operating system.
Good point
Of course, if you are root on the host OS, you
Tried it, same problem :(
I'll try disabling DEP and see if that helps. It has with other programs in
the past.
On 24 June 2014 15:22, Dietmar Maurer diet...@proxmox.com[1] wrote:
It's very peculiar - I have a Windows 7 (64bit) VM that is our build server.
In general it runs fine, but I
I changed my backup strategy recently and in every backup run over 50% of them
fail with:
vma_queue_write: write error - Broken pipe
There are 22 VM's, split over two nodes.
The backup dest is an NFS share on the NAS:
- QNAP TS-420
- 4 WD Reds on Raid 10
- 2 * GB Ethernet, bonded together
On Sat, 12 Jul 2014 12:19:24 AM Gilberto Nunes wrote:
I got the same error, but it was 'cause I was running out of space on the
backup storage...
3TB of free space on the NAS, so I don't think that's it :)
--
Lindsay
I have a weekly backup of all the VM's on a 2 node cluster to a NAS via NFS.
The backup of node 1 always succeeds.
However the backup of the last VM on Node 2 always fails (Node 2 has rather
more VM's than node 1). The error is: storage 'LOB' is not online
I can then manually backup the VM via
On Fri, 8 Aug 2014 02:14:02 PM Gilberto Nunes wrote:
Hello friends...
I have 5 VM's that I backed up to a 1 TB hard drive...
Everything goes ok until the 5th backup, when I get this error:
How's the drive mounted?
--
Lindsay
On Mon, 11 Aug 2014 05:41:37 PM Gilberto Nunes wrote:
Well... The person that sent me the error told me that she mounted the drive
with the ntfs-3g command...
But I don't think that this is the trouble.
Anyway, she (yes! She! It's a woman... rsrsrs... ) has installed another
server with the same
is not a trouble...
Regarding USB 2 or 3, I do not know... I did ask the person in question!
Tomorrow! I swear!
=)
2014-08-11 18:58 GMT-03:00 Lindsay Mathieson lindsay.mathie...@gmail.com[2]:
On Mon, 11 Aug 2014 05:41:37 PM Gilberto Nunes wrote: Well... The person that
sent me
the error, told me
On Sat, 20 Sep 2014 10:29:22 AM Sten Aus wrote:
After upgrading to the latest Proxmox 3.3, the automatic backups for the week
failed tonight. Have not yet managed to debug, but is it just me or anyone else?
Both node backups - quite large (9 hours and 15 hours) - succeeded for me.
--
Lindsay
On 20 September 2014 18:35, Sten Aus sten@eenet.ee wrote:
Yeah, it's me. NFS was mounted read-only somehow.
Thanks for the reply!
Heh! We've all been there :)
Currently running a bunch of Windows server and desktop VM's on a 2 node
cluster with kernel 2.6.
I have no need or desire for containers.
Is there any big advantage to upgrading to kernel 3.10? Performance?
Features?
Cheers,
--
Lindsay
I'm playing around with GlusterFS and have a few questions. Have got a test
setup running on a 2 node cluster (using external USB Drives!) that works
surprisingly well :) and I must say it was very easy to set up.
1. I wanted to implement a distributed/replicated file system. Is GlusterFS
the
On Thu, 23 Oct 2014 04:32:54 AM admin-at-extremeshok-dot-com wrote:
If you still want to use glusterfs, create 2 sets of replicated slaves with
6 bricks each. Each node has its own master and its slave is replicated to
the other server. This provides a backup. Reads are local, writes are
On Thu, 23 Oct 2014 09:38:13 AM Angel Docampo wrote:
I'm successfully using Proxmox with GlusterFS. Each proxmox node is also a
gluster node, and now I have two nodes (using a third proxmox node on
vmware for proxmox quorum).
Interesting, I never thought of that, and we actually have a
On Wed, 29 Oct 2014 09:06:44 AM Eneko Lacunza wrote:
haven't deployed glusterfs myself. I think you can put
CTs/ISOs/backups on glusterfs but not in ceph-rbd
Yes, I think it's images only, not a problem for this exercise
(maybe yes on cephfs).
Is that done via a manual mount with proxmox
On Wed, 29 Oct 2014 09:25:08 AM Angel Docampo wrote:
Ceph provides block storage while Gluster doesn't, but the latter is far
easier to set up. As block storage, Ceph is faster than Gluster, but I have
all my proxmox virtual environment with gluster running perfectly.
Limiting factor will be
On Wed, 29 Oct 2014 09:45:39 AM Dietmar Maurer wrote:
One big disadvantage of glusterfs is behavior after node failure. It seems
glusterfs re-reads and compares ALL data when the other node comes up again.
This produces much overhead and is very slow.
That is a big issue. I guess it's an effect of
Have been doing a lot of testing with a three node/2 osd setup
- 3TB WD red drives (about 170MB/s write)
- 2 * 1GB Ethernet Bonded dedicated to the network filesystem
With glusterfs, individual VMs were getting up to 70 MB/s write performance.
Tests on the gluster mount gave 170 MB/s, the drive
On Sun, 2 Nov 2014 09:58:51 AM Dmitry Petuhov wrote:
02.11.2014 5:18, Lindsay Mathieson wrote:
Have been doing a lot of testing with a three node/2 osd setup
- 3TB WD red drives (about 170MB/s write)
- 2 * 1GB Ethernet Bonded dedicated to the network filesystem
2 OSD each node or only 2
Is it possible to install an osd on a partition rather than a raw device?
pveceph createosd /dev/sda2 fails with:
unable to get device info for 'sda2'
--
Lindsay
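If it helps anyone later: pveceph only targets whole disks, but ceph's own
ceph-disk tool from that era could usually prepare a partition directly. A
hedged sketch, not a tested recipe (partition name taken from the post only
as an example):

  # prepare and activate an OSD on an existing partition
  ceph-disk prepare /dev/sda2
  ceph-disk activate /dev/sda2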
I have a gluster replica setup with 2 nodes, one disk per node.
Sustained writes are good enough - 80 MB/s, reads are 140 MB/s.
But once I start 4+ windows VM's on it I have problems with iowait
sitting at 10%, peaking at 20%; this produces noticeable pauses in the
VM's (VDI usage).
It's interesting
On 3 November 2014 18:10, Eneko Lacunza elacu...@binovo.es wrote:
Hi Lindsay,
Thanks for the informative reply Eneko, most helpful.
4 drives per server will be better, but using SSD for journals will help you
a lot, could even give you better performance than 4 osds per server. He had
for
On Wed, 5 Nov 2014 05:34:04 PM Eneko Lacunza wrote:
Overall, I seemed to get similar i/o to what I was getting with
gluster when I implemented an SSD cache for it (EXT4 with SSD
Journal). However ceph seemed to cope better with high loads, with one
of my stress tests - starting 7 vm's
On Mon, 10 Nov 2014 12:19:29 AM Michael Rasmussen wrote:
I think -n size=8192 and inode64 are only useful if your storage size is
greater than can be addressed by 32 bits.
True. In my case, 3TB
--
Lindsay
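For anyone trying the options being discussed, this is roughly where each one
goes (device and mount point are placeholders):

  # the directory block size is a mkfs-time option...
  mkfs.xfs -n size=8192 /dev/sdb1
  # ...while inode64 is a mount option (fstab or mount -o)
  mount -o inode64 /dev/sdb1 /mnt/osd0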
On Mon, 10 Nov 2014 12:19:29 AM Michael Rasmussen wrote:
I think -n size=8192 and inode64 are only useful if your storage size is
greater than can be addressed by 32 bits. -n size=8192 will use more of
the available storage for metadata and inode64 consumes more RAM, so if
this is not needed it
On Fri, 14 Nov 2014 11:14:22 AM rickytato rickytato wrote:
atop: 600% user cpu and 600% guest cpu... Is this value normal? Two VMs have
7 cores (1 socket / 7 cores) and cpu type host.
What OS are the VM's?
Have you looked at the CPU usage of processes in the VM's?
--
Lindsay
Thought I'd do a quick summary of my results - very subjective really, so take
with a pinch of salt.
As I suspected, raw disk benchmarks, either from within the VM (Crystal
DiskMark) or on the host (dd, bonnie++), while interesting, aren't a very good
guide to actual application performance
I get this all the time, it's been a continual problem with proxmox. Every
backup run has at least one, often many - it makes the backups useless.
I only run backups from one node at a time; I've tried rate limiting, but it
doesn't seem to make a difference.
Destination is an NFS share on a QNAP
On Sat, 29 Nov 2014 07:02:55 AM Lindsay Mathieson wrote:
What can I do? What logs should I examine?
dmesg on the proxmox box doesn't seem to show anything.
dmesg on the nas has:
[190967.283021] nfsd: non-standard errno: -14
[304641.703023] nfsd: non-standard errno: -14
https
On Sat, 29 Nov 2014 06:26:11 AM Dietmar Maurer wrote:
Are interesting too, looks more like a problem with the NAS. Crappy piece
of junk optimised for windows.
I'll try with ftp fuse and/or a windows share.
Thanks for testing.
No worries, thanks for the great product :)
results:
Taking/Restoring live snapshots (test windows VM) is taking a
ridiculous amount of time - unusably so.
With images hosted on our NAS (NFS) or on gluster, snapshots take
around 30 seconds to take and/or restore.
With ceph:
- 9 *minutes* to take
- The restore has taken *20 minutes* to get to 34%
NB: Starting the snapshot restore killed the running vm
On 2 December 2014 at 12:32, Lindsay Mathieson
lindsay.mathie...@gmail.com wrote:
Taking/Restoring live snapshots (test windows VM) is taking a
ridiculous amount of time - unusably so.
With images hosted on our NAS (NFS) or on gluster
We run all our windows dev, test and production servers on our proxmox
servers, weekly onsite DR backups and monthly offsite DR backups.
And we just got hammered with a rootkit virus that is proving extremely
difficult to remove.
I'm proposing that we restore one by one from last month's DR
On Wed, 3 Dec 2014 09:21:06 AM Gilberto Nunes wrote:
So, if you want to isolate any VM, remove the NIC from it! Simple as that...
Then access the VM via NoVNC!
I need some way of getting antivirus tools on them.
--
Lindsay
On Wed, 3 Dec 2014 12:51:35 PM Eneko Lacunza wrote:
Another thing you can do is to use NAT networking. That would allow
internet access.
Thanks Eneko, much appreciated
I've found a toolkit which can remove the rootkit - RogueKiller; it does the
job when just about everything else couldn't eve
Is there any chance of getting the cephfs kernel module available in kernel
3.10, for giant?
Failing that, is it possible to build it myself?
--
Lindsay
On Tue, 16 Dec 2014 02:27:28 PM Dietmar Maurer wrote:
AFAIK that code is not stable. But I will try to compile and include it for
testing.
Thanks, appreciated.
I've done testing with ceph-fuse for VM hosting and it works, but performance
is pretty bad. It would be interesting to see if the
Which one for use over distributed block systems (e.g. ceph/rbd)?
I was thinking OCFS2 but I saw msgs on the forum that OCFS2 is no
longer supported and removed from the kernel.
However it still seems to be there in the repos.
--
Lindsay
On Wed, 17 Dec 2014 09:47:22 AM Dietmar Maurer wrote:
Uploaded to pvetest - please report test results back to this list.
http://download.proxmox.com/debian/dists/wheezy/pvetest/binary-amd64/
Thanks, but I'm only seeing old firefly versions there (0.80.6), whereas I am
using giant (0.87)
--
On Wed, 17 Dec 2014 08:24:54 PM Lindsay Mathieson wrote:
On Wed, 17 Dec 2014 09:47:22 AM Dietmar Maurer wrote:
Uploaded to pvetest - please report test results back to this list.
http://download.proxmox.com/debian/dists/wheezy/pvetest/binary-amd64/
Thanks, but I'm only seeing old firefly
On Wed, 17 Dec 2014 09:47:22 AM Dietmar Maurer wrote:
Uploaded to pvetest - please report test results back to this list.
Quick notes, haven't tested with vm's yet:
mount reports:
mount: error writing /etc/mtab: Invalid argument
However the mount succeeds and there is an entry in mtab
Also it's not mounting on boot, nor is fuse either for that matter. The mons/
mds are on two other nodes, so they are available when this node is booting.
They can be mounted manually after boot.
My fstab:
id=admin /mnt/cephfs fuse.ceph defaults,nonempty,_netdev 0 0
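For comparison, the equivalent kernel-client mount (the module discussed
elsewhere in this thread) would look roughly like this - the monitor address
and secret file path are placeholders:

  # /etc/fstab - cephfs via the kernel client (illustrative)
  192.168.1.10:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime  0  0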
On Wed, 17 Dec 2014 01:07:59 PM Dietmar Maurer wrote:
Maybe it is better to report those errors on the ceph lists?
yah.
What sort of test results were you wanting?
--
Lindsay
On Wed, 17 Dec 2014 01:57:48 PM Dietmar Maurer wrote:
What sort of test results were you wanting?
Oh, my primary interest was the performance result. You already
posted that, thanks.
Further to that - I've managed to reliably produce data corruption using the
3.10 module. Feedback from
Oddball one - I have a set of VM's set to start in order on node boot. The 2nd
one always fails to start, with a timeout error. You can then manually start
it from the gui or cmd line with no problems.
Unfortunately it's the primary Active Directory server, so it's rather important
it starts after
On Fri, 26 Dec 2014 09:11:09 AM Lindsay Mathieson wrote:
I think I've deciphered what's causing this - ceph.conf and the various keys
reside on /etc/pve, which is a fuse mount itself and is not available when
cephfs is being mounted.
Not sure what to do next though.
I created the following
On Sat, 27 Dec 2014 12:25:26 AM Laurent Dumont wrote:
Live migration would be nice but it's not critical for now. I assume that
switching storage model down the road is still a possibility.
Yup, easily done. IMO the easiest way is to add a shared storage using a NAS
NFS share.
More fun to
As per the subject :)
I was experimenting with setting up an SSD-only pool as a prelim to setting up
a cache tier.
I added the osd to ceph.conf
[osd.2]
host = vnb
And added it to the crush map with host=vnb-sdd to stop it getting added to
the default ruleset.
ceph osd crush add osd.2
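The full form of that command (cut off above) also takes a weight and a CRUSH
location; purely as an illustration, with a made-up weight:

  # add osd.2 under a separate host bucket so it stays out of the default tree
  ceph osd crush add osd.2 0.5 host=vnb-sdd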
On Wed, 18 Feb 2015 03:45:16 PM you wrote:
After the latest grub updates I got this warning again, despite having
installed grub to /dev/sda earlier.
grub-install: warning: File system `ext2' doesn't support embedding.
grub-install: warning: Embedding is not possible. GRUB can
On Tue, 17 Feb 2015 12:12:15 PM Friedrich Ramberger wrote:
Time-delay for start in Options - Start/Shutdown order - Startup
delay
Doesn't that delay apply to the *next* vm due to start?
How does one activate the guest agent? I didn't even know it was there.
On Fri, 20 Feb 2015 11:11:59 AM Steffen Wagner wrote:
Haha, me too... What features does this agent have?
Thanks Steffen
Figured out how to activate it:
qm set VMID -agent 1
I'm guessing you need the spice guest agent installed on the vm (windows or
linux). Proxmox can then issue
On Wed, 28 Jan 2015 02:40:44 PM Martin Maurer wrote:
thanks for feedback - already tried 0.100 virtio?
http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/
any reason why you used 0.94 instead of latest?
Because I didn't know about it! Red Hat are very quiet about these releases
On Wed, 28 Jan 2015 02:40:44 PM Martin Maurer wrote:
thanks for feedback - already tried 0.100 virtio?
http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/
P.S. Thanks for the heads up.
Do you know where the release notes for it are?
Part of the problem might be that the restore didn't create a sparse file. The
.raw file created on cephfs has a du of 128GB instead of the 60GB it had when
backed up.
--
Lindsay
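If useful, one generic way to get the sparseness back after the fact (with
the VM stopped; file names are placeholders) is to re-copy the image with
hole detection:

  cp --sparse=always vm-100-disk-1.raw vm-100-disk-1.sparse.raw
  mv vm-100-disk-1.sparse.raw vm-100-disk-1.raw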
I have my ceph cluster setup - 6 osd's on 2 nodes with ssd journals. It's fast
enough for my requirements - 140 MB/s seq write, 500 MB/s seq read, IOPS are
reasonable too.
I have 20 VM's running over rbd, and they perform quite well. Responsive
desktops and servers, no complaints there.
Where it
On Mon, 9 Feb 2015 03:12:32 PM Eneko Lacunza wrote:
Is anyone using virtio drivers .94 and/or .100?
I'm using .100 - no issues I've detected so far. Even with Win 10 :)
How accurate/generalised is this? For my VNG node the WebUI shows 24GB used
out of 32GB.
Whereas atop on that node shows only 350 *MB* free and my swap is getting
thrashed, which makes sense because vm performance is in the toilet :)
Regardless, we will upgrade to 64GB of ram :)
--
Lindsay
The latest update updated grub - it seems to have installed ok and the server
rebooted fine (3.10 kernel).
But I got these messages while updating:
Replacing config file /etc/default/grub with new version
Installing for i386-pc platform.
Installation finished. No error reported.
Installing
On 15 February 2015 at 18:15, Dietmar Maurer diet...@proxmox.com wrote:
Sure. I guess we changed those defaults over time.
Besides, does it work now if you install grub directly on /dev/sda?
Yes it does, did:
grub-install /dev/sda
update-grub
No warnings in the update and it rebooted