Have you defined the IP of your router as the default gateway in the VM's
network configuration?
Is there a chance that a firewall is enabled on this VM? Can you ping the
IP of the VM from your router?
On Aug 27, 2013 7:45 PM, "Keith Clark"
wrote:
> On 13-08-27 12:37 PM, Marco Gabriel - inett GmbH
Hello list,
I have a single node running a single vm (win2k3).
It was working fine on Proxmox 2.3. I upgraded this box to 3.1 last week by
following:
http://pve.proxmox.com/wiki/Upgrade_from_2.3_to_3.0.
All went smoothly, except that the backup does not complete for some reason.
I have a local nfs mount o
I did some tests on 2 different NFS/SMB shares and I had the same very slow
backup performance.
Then I reverted from *pve-kernel-2.6.32-23-pve* to *pve-kernel-2.6.32-20-pve*
and the backup performance returned to normal speed.
Yannis Milios
--
Systems Administrator
Mob. 0030 6932
Hello
Can anyone please suggest how to make spice client (remote viewer) work on
Ubuntu 12.04 ?
Tried to compile version 0.5.7 from source without success.
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo
From the forum:
Edit the VM config /etc/qemu-server/.conf, and directly assign the
device. For example:
ide0: /dev/sdb2
Yannis Milios
--
Systems Administrator
Mob. 0030 6932-657-029
Tel. 0030 211-800-1230
E-mail. yannis.mil...@gmail.com
On Wed, Dec 4, 2013 at 12:43
maybe this can help you:
http://forum.proxmox.com/threads/7674-Restore-Proxmox-KVM-image-to-another-Proxmox-server
On Wed, Dec 4, 2013 at 12:43 AM, Richard
Hello
I have followed these guides to build a two-node HA DRBD cluster + iSCSI,
but not for production use.
Link1 <http://wiki.skytech.dk/images/4/44/Ha-iscsi.pdf> Link2
Openfiler is discontinued.
http://pve.proxmox.com/wiki/Live_Snapshots
http://pve.proxmox.com/wiki/Storage_Model
On Mar 28, 2014 2:43 PM, "Ikenna Okpala" wrote:
> Hi,
> Is it possible to take snapshots of the proxmox host ?
> Can you please share techniques that exist for this.
>
> Regards
>
> --
> Ikenna
>
hello
Yes, I had the same problem and asked on the forum. It is a known issue with
Spice. As a workaround, try to close remote-viewer before live migrating the VM,
or try to use a different display adapter (vmware or default) and see what
happens.
On Jul 4, 2014 9:16 PM, "Gilberto Nunes" wrote:
> I try it
Yes, you can do it. I'm using almost the same setup on my server with
Debian and Proxmox repos as described in the already mentioned wiki.
> I have created linux raid arrays on the first set for o/s,images,backups
and another array on ssds for images only (lvm).
>
> On Jul 7, 2014 12:36 PM, "Diaoli
I tried the same config in a lab but it does not work.
I think it's not possible to have HA without proper fencing.
Im
On Jul 7, 2014 12:40 PM, "Angel Docampo" wrote:
> What do you mean by "it works"?
> Without fence mechanism, the cluster won't be aware there a node down, and
> then you can continu
If you use the iSCSI + LVM combination you can use LVM snapshots.
Otherwise, if your VM is in qcow2 format you can use qcow2 snapshots.
https://pve.proxmox.com/wiki/Live_Snapshots
It has nothing to do with your Ubuntu storage.
Ubuntu just exposes block-level storage via iSCSI to the Proxmox nodes.
So it is up to Proxmox to do the snapshots. However, that depends on
how you "format" this block-level storage.
If you "format" it as LVM then Proxmox can do LVM snapshots on
No, they do not.
Follow these steps from the wiki:
LVM Groups with Network Backing
In this configuration, network block devices (iSCSI targets) are used as
the physical volumes for LVM logical volume storage. This is a two step
procedure and can be fully configured via the web interface.
1. F
> >> I cannot change disk format...
That is normal. On an LVM volume you can only use the raw format.
However, this does not prevent you from doing LVM-based snapshots.
Did you create that VM and it still does not show the Snapshot option?
>So we are confusing "LVM Snapshots" with "Live snapshots" don't you think?
*Yes, of course that is the case.. :)*
>My doubt is if I deploy my software storage under Ubuntu or Debian or even
in CentOS, whatever SO, >if I get the same behavior like real storage.
*No, iscsi target on Ubuntu or wh
It looks like it has a technology called IBM FlashCopy which I think is
similar to snapshots.
It supports up to 64 targets with the built-in license, which can be upgraded
to up to 2040 targets.
Check the link below:
http://www-03.ibm.com/systems/storage/disk/storwize_v3700/features.html
On Jul 20, 2014 2:33
As long as it supports only iSCSI and not NFS, I think snapshots through
Proxmox would not be possible.
If it had NFS support, then you could live-snapshot by using qcow2-format
VMs.
On Jul 20, 2014 5:37 PM, "Gilberto Nunes"
wrote:
> Thanks for all answers, but I meant make snapshot through Pr
like Ceph’s RADOS Block Device (RBD)).
Future versions will also support Sheepdog or Nexenta
On Jul 20, 2014 7:30 PM, "Stefan Sänger" wrote:
> On 20.07.2014 16:48, Yannis Milios wrote:
> > As long as it supports only iscsi and not nfs then snapshots through
> > proxmox I
hello
Can anyone suggest a cheap PDU device (other than APC) to be used as a fence
device for a test two-node cluster (+qdisk)?
Switch and IPMI are not an option.
thank you
to
virtio disk. Finally, remove the second small virtio disk.
This way it should work.
On Tue, Aug 12, 2014 at 3:41 PM, Gilberto Nunes
wrote:
> Hi
>
Why don't you try to boot from the win2k8 DVD and try to fix the MBR, or see
if the data is there?
maybe this can help? http://goo.gl/9vRy0Y
Your reported disk size: 319G
and virtual size: 160G
So the total disk size is 319G and 160G is now occupied. Seems normal. du
reports the actual size and ls reports the max disk size.
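This du/ls difference can be reproduced with a plain sparse file (filename is just an example):

```shell
# 'ls' reports the apparent (maximum) size of a file, while 'du'
# reports the disk blocks actually allocated. A sparse file shows
# the difference clearly: 1G apparent, ~0 allocated.
truncate -s 1G sparse.img
ls -l sparse.img      # size column: 1073741824 bytes
du -k sparse.img      # allocated: ~0 KB, since nothing was written yet
rm -f sparse.img
```

A raw VM disk behaves the same way: ls shows the provisioned size, du shows what the guest has actually written.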
To Proxmox devs: please look at this forum section because it looks like
someone hijacked it:
forum.proxmox.com/forums/13-What-Virtual-Appliances-do-you-want-to-see
Sent by mobile
it was before...but now it seems that somebody fixed it.
On Sep 9, 2014 9:01 PM, "sebast...@debianfan.de"
wrote:
> Which thread in the forum do you mean?
>
>
>
> ***
> http://www.lkg-nw.de
> Evangelische Gemeinschaft Neu Wulmstorf
>
>
hello,
Never tried online storage migration but what happens if you do the
following:
1. Create a Windows VM in raw format, on local disk storage (not an NFS mount).
2. Start installing updates on windows and initiate online storage
migration to the nfs mount in qcow2 format.
Do you experience the
VMA is the new backup format since PVE 2.3. It is uncompressed and it should
reflect the actual data occupied inside your vm (not raw file size). Gzip
and Lzo are used for compression of this file.
http://pve.proxmox.com/wiki/VMA
On Thu, Nov 13, 2014 at 5:57 PM, Chris Murray
wrote:
> Hi,
>
>
Hello
Can't answer how the DHCP server on NAT works, but if you don't need
internet access, why don't you just create a "dummy" vmbr interface in
/etc/network/interfaces and connect your VMs to that?
It's the same as "host only" networking on VMware. That way you can
set up an internal DHCP server
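A minimal sketch of such a "dummy" bridge (names and addresses are made up, adjust to taste):

```
# /etc/network/interfaces fragment: a bridge with no physical ports,
# i.e. an isolated "host only" network segment for the VMs.
auto vmbr9
iface vmbr9 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
```

After an ifup vmbr9 (or a reboot), attach the VM NICs to vmbr9 and run the DHCP server in one of the VMs on that segment.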
>
> >Now I am able to create live snapshots as many as I want, as frequently
> as I want and I can send >incremental backups over the network. In this
> case the on-site backup is "free" (in time and in >storage), remote backup
> is quick (just send the difference between snapshots over the network
Dietmar is right. The purpose of the installer is just to install the O/S.
Particularly in the ZFS case, there are so many parameters you have to consider
before creating your first data pool that the installer simply cannot
include them all.
Install Proxmox on your first SSD disk and then create your data pool
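As a rough sketch of that second step (device names and pool layout are illustrative, not a recommendation):

```
# Create a mirrored data pool with 4K alignment and lz4 compression,
# then register it as a PVE storage. Devices are examples only.
zpool create -o ashift=12 -O compression=lz4 -O atime=off \
    tank mirror /dev/sdb /dev/sdc
pvesm add zfspool tank-vm --pool tank
```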
First try to shrink the fs+partition+LVM inside the guest VM with gparted, as
Lindsay suggested.
Reboot the VM to check if resize completed successfully.
Check actual fs+partition+lvm sizes inside VM and verify their total size.
Shutdown VM and try to resize raw disk so that the total size is bigger at
le
Hi,
I remember that I had issues with virtio on previous pfsense versions.
Specifically, the traffic shaping was not working correctly, but in the
latest versions it has been corrected.
The best place to ask though is on pfsense forum or mailing list.
On 11/09/2015 07:27 PM, Luis G. Coralle
Hi Muhammad,
This should clear up things a bit (taken from Oracle ZFS Admin guide):
"A clone is a writable volume or file system whose initial contents are
the same as the dataset from which it was created. As with snapshots,
creating a clone is nearly instantaneous and initially consumes no
What about using virtualized OmniOS on that storage node with hdds
passthru-ed to the OmniOS VM. Could that be an option?
I presume that this will overcome your nic driver issue.
Regards,
On Tuesday, 8 March 2016, Mikhail wrote:
> Answering to myself - it looks like I need IET iscsi provider t
Hi Ralf,
Are both hard drives exactly the same model?
I've noticed that your drive uses a 512-byte sector size (instead of the 4K
sector size which is the current trend).
In that case, is your pool properly aligned to ashift=9 ?
More info here:
http://wiki.illumos.org/display/illumos/ZFS+and+Advan
If your backup target is a NFS server, try to mount it in vers=3 instead of
4.
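For instance (storage name, server and paths are made up), NFSv3 can be forced for a PVE NFS storage via the options line in /etc/pve/storage.cfg:

```
nfs: backup-nas
        export /export/backups
        server 192.168.1.10
        path /mnt/pve/backup-nas
        content backup
        options vers=3
```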
On Monday, 12 September 2016, Miguel González
wrote:
> Hi,
>
> I have a software RAID of 2 Tb HDs. I have upgraded from 3.4 to 4.2
> and migrated the VMs that I had.
>
> I have realized backups are taking three
I can't answer your question directly since I'm not aware of the PVE backup
internals; however, as a workaround you could try the following:
If your ZFS volumes (where the VMs reside) have compression enabled, you could
reclaim unused space by:
- running sdelete on windows vms
- creating a zero filled file via dd on lin
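On a Linux guest, that zero-fill step might look like this (file name and size are illustrative; in practice size it close to the free space of the filesystem):

```shell
# Write a file of zeros (1 GiB here as an example), flush it to disk,
# then delete it. On a compressed zvol the zeroed blocks compress
# away, so the space is reclaimed underneath.
dd if=/dev/zero of=./zerofill.tmp bs=1M count=1024 status=none
sync
rm -f ./zerofill.tmp
```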
Do you use a single disk or a raid array as a backup storage?
What's the output of: 'cat /proc/mounts | grep ext4' ?
On Tue, Sep 13, 2016 at 9:35 AM, miguel gonzalez wrote:
> Sorry i forgot. I use local storage and ext4 for the vms and backups.
>
> Before in 3.4 I had ext3.
>
> Many thanks
This process is described in detail here:
On Wed, Sep 21, 2016 at 12:29 PM, Thomas Lamprecht
wrote:
>
> On 09/21/2016 12:35 PM, Bart Lageweg | Bizway wrote:
>
>> Thanks. It's working!
>> Only it is still existing in the webinterface (already restart
>> pve-cluster and moved /etc/pve/nodes/noden
Sorry forgot the wiki link :)
https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster#Remove_a_cluster_node
On Wed, Sep 21, 2016 at 1:11 PM, Yannis Milios
wrote:
> This process is described in detail here:
>
>
> On Wed, Sep 21, 2016 at 12:29 PM, Thomas Lamprecht <
> t.lam
>>...but there is one deal breaker for us and thats snapshots - they are
incredibly >> slow to restore.
You can try to clone instead of rolling back an image to the snapshot. It's
much faster and is the method recommended by the official Ceph documentation.
http://docs.ceph.com/docs/jewel/rbd/rbd-snapsh
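A hedged sketch of the clone approach (pool, image and snapshot names are made up):

```
# Protect the snapshot, clone it into a new image, and optionally
# flatten the clone so it no longer depends on the parent snapshot.
rbd snap protect rbdpool1/vm-102-disk-1@snap1
rbd clone rbdpool1/vm-102-disk-1@snap1 rbdpool1/vm-102-disk-1-clone
rbd flatten rbdpool1/vm-102-disk-1-clone
```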
Hi,
Sorry if this has been already answered, but is there a way to preserve
custom options (args:) in vm.conf file?
They seem to disappear after making any change via the webgui.
Thanks
--
Sent from Gmail Mobile
Not sure if it's related, but after upgrading yesterday to the latest
updates, Ceph snapshots take a very long time to complete and finally they
fail.
This happens only if the VM is running and if I check the 'include RAM' box
in snapshot window. All 3 pve/ceph nodes are upgraded to the latest upda
> On Fri, Nov 11, 2016 at 12:11:27PM +0000, Yannis Milios wrote:
> > Not sure if it's related, but after upgrading yesterday to the latest
> > updates, Ceph snapshots take a very long time to complete and finally
> they
> > fail.
> > This happens only if the VM is run
I would use plain dd or clonezilla to backup. Then restore to vm and adjust
partitions/vdisks as needed by using gparted.
On Wednesday, 16 November 2016, Marco Gaiarin wrote:
>
> I need to P2V a debian 8 server, installed on UEFI/GPT.
>
> A little complication born by the fact that i need to P
Regarding drbd, is it possible to include drbd8 kernel module + userland
utilities instead which are not affected by the license change?
On Mon, 21 Nov 2016 at 06:20, Alexandre DERUMIER
wrote:
> >>Is this an existing feature in qemu or still under development? (or
> >>planning)
>
> qemu already
Hello,
I'm sorry if this has been asked in the past, but may I ask if the PVE
SPICE implementation supports smartcard passthrough and, if yes, how it can
be enabled?
For usb card readers usb redirection works but for built in ones (laptops)
smartcard protocol redirection is needed.
I'm currently ev
How about using VMware Converter for P2V to vmdk file(s) and then attach
the vmdk(s) to PVE ?
or by using 'SSH Migration of a Windows physical machine to a VM raw file
directly' described in the WiKi ?
(
https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE#SSH_Migration_of_a_Windows_ph
Just tested it by doing a P2V of a Win10 laptop with VMware Converter, with
UEFI+GPT+Secure Boot enabled; it worked fine. However, I didn't test it with
a Dynamic disks/RAID1 setup..
On Tue, 17 Jan 2017 at 16:06, Yannis Milios wrote:
> How about using VMware Converter for P2V to vmdk file(s)
Your question is quite generic because it depends on how you have configured
your PVE, what the storage backends will be, etc.
I will assume that you have one PVE server with VMs stored in a local
storage like LVM or ZFS. You have a NAS as well where you store the backups
of these VMs (full backups).NA
Hello,
Maybe these two links can help?
https://pve.proxmox.com/wiki/Qemu-guest-agent
https://pve.proxmox.com/wiki/Acpi_kvm
Yannis
On Mon, 30 Jan 2017 at 23:47, Leonardo Dourado <
leonardo.dour...@itrace.com.br> wrote:
> Hello guys!
>
> I am trying to run a programmed backup (STOP Mode) and
In my opinion this is related to difficulties in cluster communication. Have
a look at these notes:
https://pve.proxmox.com/wiki/Multicast_notes
On Fri, 24 Feb 2017 at 22:45, Uwe Sauter wrote:
> Hi,
>
> no I didn't think about that.
>
> I now tried and restarted pveproxy afterwards but to no avai
>> So if you allocate 100GB to a vm disk, and write a 99GB at some time,
>> then delete 98GB, the vm-disk will still report 99GB of allocated space
>> AFAIK
Is it possible to reclaim this space by using virtio-scsi as the virtual
storage controller on the VM and then enabling the discard option?
Do we n
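If that is indeed supported, the relevant VM config lines would look something like this (vmid, storage name and size are examples):

```
# <vmid>.conf: virtio-scsi controller plus discard on the disk, so
# guest TRIM/UNMAP can release unused blocks back to the storage.
scsihw: virtio-scsi-pci
scsi0: local-zfs:vm-100-disk-0,discard=on,size=32G
```

The guest still has to issue the trims itself (e.g. fstrim on Linux, or Windows' automatic TRIM).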
You just need to edit the VM's config file and change the line(s) related to
the VM disk from scsiX to ideX.
Posting the VM config would help...
Generally speaking, Windows is a bit painful to virtualise.
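For example (vmid, storage name and size are illustrative), in /etc/pve/qemu-server/100.conf the change would be:

```
# before:
scsi0: local-lvm:vm-100-disk-0,size=60G
# after:
ide0: local-lvm:vm-100-disk-0,size=60G
```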
On Fri, 24 Mar 2017 at 18:48, wrote:
> So, i originally set these up as SCSI drives in P
Hello
I found this from a previous post, not sure if the issue has been solved
now:
http://pve.proxmox.com/pipermail/pve-user/2016-November/167784.html
Yannis
On Tue, 18 Apr 2017 at 09:55, Tobias Kropf wrote:
> Hi @ list
>
> We have a problem with our pve-ovs setup...
>
> The setup is based
>
>
> We can see that the vm's get replicated but we have no clue on which nodes
> is primary/secondary.
>
The resource is in Primary mode on the node where the actual VM is currently
running. The rest of the replicated resources are (or should be) in Secondary
mode.
The only time where a resource is in Primary
>
> >> Personally I'd go with zfs over btrf.
>
>> Interesting. I see that also with zfs, you can expose previous versions
via samba.
>> You prefer zfs, because..? (The "more mature" argument, or other reasons
as well..? perhaps specific to running on Qemu VM on ceph >> storage?)
I would go for ZF
>> (storage server has 4x4TB SAS
>> drives in RAID10 configured with MDADM)
Have you checked if these drives are properly aligned? Sometimes that can
cause low r/w performance.
Is there any particular reason you use mdadm instead of a h/w RAID controller?
Yannis
Hello,
>> RAIDZ1 (2 Disks) -> qemu -> ZFS (1 Disk)
Is there any particular reason for having this kind of setup? I mean, in
general, using ZFS inside a VM is not recommended.
>> NAME     STATE   READ WRITE CKSUM
>> backup   ONLINE     0     0   *13*
>> s
>
>
>> 2 reason for this, first having checksums^^, second snapshots.
> And I prefer ZFS over any other filesystem.
>
>
What's the reason why ZFS is not good in a VM?
IMHO that's a waste of system resources. Since your VM disk already lies on
a ZFS filesystem, where it can leverage all features yo
Since there were no config changes, I would have a look at cluster
communication, e.g. switch issues?
On Fri, Aug 11, 2017 at 11:02 AM, Chris Tomkins
wrote:
> Hi Proxmox users,
>
> I have a 4 node cluster. It has been in production for a few months with
> few/no issues.
>
> This morning one o
Have you tried to change the VM's SCSI controller to something different than
LSI? Does that help?
Yannis
On Tue, 15 Aug 2017 at 08:02, Bill Arlofski wrote:
>
> Hello everyone.
>
> I am not sure this is the right place to ask, but I am also not sure where
> to
> start, so this list seemed like a
My understanding is that with pvesr, live migration of the guest VM is not
supported:
"Virtual guest with active replication cannot currently use online
migration. Offline migration is supported in general"
On Fri, 25 Aug 2017 at 16:48, Fábio Rabelo
wrote:
> Sorry my knowledge do not go beyon
>
> /etc/hosts.conf has some effect over cluster and migration??
>
Yes, by default PVE will use the IP addresses specified there for cluster
communication, VM live migration and management.
> I meant, if I have two nic, in two servers, one nic to administrative
> access and the second, working as a private ne
>
> VM in subnetwork1(resp.2) on host1 must be communicate with VM in
> subnetwork1(resp.2) on host2 via just one single interface and my host must
> be not reacheable by subnetwork.
>
> How I can make this ?
>
>
>
Isolation at layer 2 can be achieved either by using 2 separate physical
network car
Here is an example for vm102 on rbdpool1:
root@pve3:~# rbd du -p rbdpool1 vm-102-disk-1
warning: fast-diff map is not enabled for vm-102-disk-1. operation may be
slow.
NAME PROVISIONED USED
vm-102-disk-1@snap1 12288M 11852M
vm-102-disk-1@snap2
No, I haven't experienced this issue on my setup. Can you post your
/etc/network/interfaces file and package versions (pveversion -v) ?
On Mon, Sep 25, 2017 at 11:41 AM, Jean-mathieu CHANTREIN <
jean-mathieu.chantr...@univ-angers.fr> wrote:
> Hello.
>
> - Mail original -
>
> But it's annoying that Proxmox blocks the VMs and that we can't start them
> after that, without disabling them and enable them manually.
>
>
Looks like it's designed to work like that, but there may still be some room
for fine-tuning:
https://pve.proxmox.com/wiki/High_Availability
and more specifi
>> node pve already defined
To add additional nodes to the cluster, you need to run 'pvecm add ...'
*on* the 2nd and 3rd node.
For example let's assume you have 3 nodes (pve1,pve2,pve3).
To create the cluster:
---
on pve1:
pvecm create pvecluster1
To add additional nodes:
--
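A hedged sketch of the usual join step (the IP of pve1 is made up):

```
# on pve2 and on pve3 (each node joins by pointing at pve1):
pvecm add 192.168.0.11    # IP or hostname of pve1
pvecm status              # then verify cluster membership/quorum
```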
>
>
> I am having an issue as to when I add iscsi storage to the mix, all
> of the buttons are greyed out.
Have you checked this first ? https://pve.proxmox.com/wiki/Storage:_iSCSI
Are you planning to use direct mode (separate iscsi lun for each vm) or
lvm (content mode) ? I guess the se
>> Seems to me that redirect USB in the same way Linux spice client does...
>> Is that correct???
Yes, that's the Windows implementation for SPICE usb redirection.
ilberto.nunes36
>
>
>
>
> 2018-01-03 10:26 GMT-02:00 Yannis Milios :
>
> > >> Seems to me that redirect USB in the same way Linux spice client
> does...
> > >> Is that correct???
> >
> > Yes, that's the Windows implementation for SPICE us
>This is a hardware RAID controller, >but disks are not configured as RAID.
ZFS doesn’t like RAID controllers even when they are not configured in raid
mode.
If you really want to use ZFS with it's power, get a proper HBA card (or a
RAID controller that can be flashed with IT - Initiator-target -
According to the wiki [1], it is tested (and supported) to add pve5.x nodes to
a pve4.x cluster.
Perhaps you will have to upgrade the existing cluster first to the latest
version (4.4), then proceed to the addition of the pve5.2
node(s). Moreover, it is also stated that this should be used as an
interme
>>Have plugged the token into the VM Host and used Add hardware to pass through
>>the USB
>>token to the VM
I'm using something similar (usb smart card reader/pki card for user
authentication), but in my case I decided that perhaps it's better to
connect the USB token to the client machine rather
> Yes I realise it is, what I'm saying is should it also be doing those
> steps?
Usually you don't have to, but as things often can go wrong you *may* have
to do things manually sometimes.
The GUI is great and saves lots of work; however, knowing how to solve
problems manually when they arise via the
Well, that’s basic Linux sysadmin, nothing related to PVE ...
You can find dozens of articles by googling. If you feel too lazy to search,
here are some examples ...
https://access.redhat.com/articles/1190213
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/storage_admini
If both VMs have their vnic attached to vmbr1, then it might be also worth
having a look at the firewall on the o/s inside the VM. If that's enabled
on both sides, it could potentially block ping requests...
On Friday, August 3, 2018, Josh Knight wrote:
> for vm1 and vm2, are the LAN interface
>
> (zfs create -V 100G rpool/lvm) and make that a PV (pvcreate
>> > /dev/zvol/rpool/lvm) and make a VG (vgcreate pve /dev/zvol/rpool/lvm)
>> > and then a LV (lvcreate -L100% pve/data)
>>
>>
Try the above as it was suggested to you ...
> >But I suspect I have no space to create an
>> >additional
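Putting the quoted suggestion together as one hedged sketch (note that lvcreate takes percentage sizes via lowercase -l, e.g. -l 100%FREE, rather than -L100%):

```
# Back an LVM volume group with a ZFS zvol (names/sizes as quoted).
zfs create -V 100G rpool/lvm
pvcreate /dev/zvol/rpool/lvm
vgcreate pve /dev/zvol/rpool/lvm
lvcreate -l 100%FREE -n data pve
```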
Both drbd8 and drbd9 can be configured in PVE without issues. The former can
be configured in pairs of two PVE nodes, while for the latter three PVE nodes
is the minimum. IIRC the drbd8 kmod is included by default in the PVE kernel,
so you will only need to install drbd-utils to get started.
For drbd9
Did you interrupt the boot process on the VM by pressing ESC, in order to
select the DVD drive as the boot device ?
On Tue, 28 Aug 2018 at 09:09, lists wrote:
> Hi,
>
> I am trying to move a physical windows 2008 enterprise uefi installation
> (Version 6.0.6002 Service Pack 2 Build 6002) into
> yes :-) ("press any key to boot the windows dvd" or something like that)
>
> The "windows is loading files..." progress bar appears, and goes full,
> and then it just stops progressing. (and I waited many hours)
>
Have you tried to boot from the same ISO from BIOS legacy mode ? If you
have same
Can’t comment on the I/O issues, but in regards to the snapshot rollback, I
would personally prefer to clone the snapshot instead of rolling back. It
has proven much faster for me to recover in emergencies.
Then, after recovering, to release the clone from its snapshot
reference, you can f
This seems a good reading as well...
https://ceph.com/geen-categorie/ceph-osd-reweight/
On Fri, 31 Aug 2018 at 12:10, Eneko Lacunza wrote:
> You can do so from CLI:
>
> ceph osd crush reweight osd.N
>
>
> https://ceph.com/geen-categorie/difference-between-ceph-osd-reweight-and-ceph-osd-crush-rew
If both VMs fail with a BSOD, then definitely something must be wrong
somewhere.
Win2016 is supported in PVE 5+, so don't think it's necessary to upgrade to
a newer version.
I would focus my attention on any potential hardware issues on the actual
host (RAM,Storage etc).
What's your underlying sto
5]: successful auth for
> user 'root@pam'
> Sep 4 15:57:20 pve2 pvedaemon[4545]: successful auth for
> user 'root@pam'
> Sep 4 15:57:20 pve2 pvedaemon[931]: successful auth for user
> 'root@pam'
> Sep 4 15:57:40 pve2 pvedaemon[4545]: starting task
>
Another option would be going cheap and adding something like this as a 3rd
node ...
https://pve.proxmox.com/wiki/Raspberry_Pi_as_third_node
On Fri, 28 Sep 2018 at 19:03, Mark Adams wrote:
> If you have to stick with 2 servers, personally I would go for zfs as your
> storage. Storage replicatio
The previous two posts provided you already with enough tips (including a
link to the wiki) on how to troubleshoot your situation.
It’s now up to you to put some effort into reading carefully what is being
said there, in order first to understand and then to troubleshoot the problem.
In my opinion (an
We don’t have any client machine in the server room, so
> when we fix something in the room (cables, routing, etc…), we need to
> go out and check the VMs on another machine outside the room,
> sometimes making us come back, etc…
>
Is it really that difficult to get a laptop in the server room to
That's up to you to decide, PVE supports both hyper-converged setups (where
compute and storage nodes share the same hardware) and scenarios where
compute/storage nodes are separate.
You can choose for example to have 3 nodes in a PVE cluster, acting as
compute nodes and 3 separate nodes for the Ce
A few things I would try are ...
- Clear browser cache.
- Check if installed package versions are the same on *all* nodes
(pveversion -v).
- Restart pveproxy service on pve2 (or any other related service).
- Check the logs of pve2 when you try accessing it for any clues.
Do you get the same probl
Since it is a Linux installation, you could try to backup the system via a
livecd with fsarchiver to an external drive, then restore it to the virtual
disk.
http://www.fsarchiver.org/
Yannis
On Wed, 6 Feb 2019 at 10:18, Gilberto Nunes
wrote:
> Hi list
>
> I have here a VM with has directly att
Yes, it is possible...
https://pve.proxmox.com/pve-docs/chapter-pvesr.html
On Thu, 14 Mar 2019 at 11:19, Fabrizio Cuseo wrote:
> Hello.
> I have a customer with a small cluster, 2 servers (different models).
>
> I would like to replicate VMs from host A to host B, but from local-zfs
> (host A
8/24/2012 11:04 AM, Yannis Milios wrote:
> > The underlying block device (/dev/sdb) is a hardware raid-5 array on
> > both nodes (total 500gb).
> > I tried to expand this device by adding additional hdd on each node (now
> > 750gb).
> > After that giving fdisk -l /dev/
Hello
We have a small office with 25 clients and 4 servers with an MSSQL DB,
domain controllers, Exchange 03 and an MS file server.
Is it possible to migrate them to a two-node cluster with DRBD?
We need HA on these roles, but I do not know how specific services like
SQL or Exchange will react on
I can see many posts on proxmox forums, from users using MS products on
their proxmox infrastructure.
I don't want to utilize Hyper-V for 2 reasons:
1. It requires 3 machines + 1 storage for a cluster setup. (One is a DC).
2. I really don't like Hyper-V's performance, but as you said maybe it
offers b
Hello all!
I did a P2V using an Acronis True Image (.tib) backup of a Win2000 Server
physical server by creating a VM on Proxmox 2.1 and using the normal restore
procedure with the Acronis Boot CD.
The restore went fine and the VM starts until the Starting Windows 2000
screen, but does not proceed further from
Hello all,
I have followed this guide
(http://www.linbit.com/fileadmin/tech-guides/ha-iscsi.pdf) to build a two-node
active/passive storage cluster using drbd, pacemaker and IET iSCSI. The
cluster works correctly and I can connect to the iSCSI target and the two
LUNs (lun0, lun1) inside it from a win