On 30/08/2017 08:32, Eneko Lacunza wrote:
> Hi,
>
> On 29/08/17 at 19:41, Petric Frank wrote:
>>
>>> Is it possible to configure a proxmox cluster behind a single public IP
>>> address ? If possible, how do I configure my nodes at the time of
>>> installation ? I don't see this
I have had a problem in a cluster configured with a gluster fs.
I manually migrated the VMs to another server, but when I started one
it gave me the following:
kvm: -drive
Hi all,
I have had some trouble with one server in a cluster.
As the VMs have all their disks on shared storage, I thought it would
be possible to migrate them from the out-of-order server to the
others in the cluster.
HA is not enabled (I prefer doing it manually).
The GUI gave errors, though
On 12/04/2017 07:50, Sten Aus wrote:
> I can confirm that we've successfully moved from iSCSI 10G (HP EVA
> storage) to Ceph (10G) by doing move disk from Proxmox GUI.
>
> Haven't encountered any problems yet. :)
>
> On 08.03.17 1:36, Kevin Lemonnier wrote:
>>> Has anyone used PVE "move disk"
On 11/04/2017 21:51, Jeff Palmer wrote:
> By default, in a 4 node cluster, 3 members would need to agree for quorum,
> not 2. In your example, if 2 hosts split from the other 2 hosts, neither
> side will have quorum. No split brain in that scenario.
>
> Think of quorum as a 'majority wins'
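Jeff's majority rule can be sketched in a couple of lines of shell (the node counts below are illustrative):

```shell
# Quorum is a strict majority: for n votes, floor(n/2) + 1 are needed.
nodes=4
quorum=$(( nodes / 2 + 1 ))
echo "A ${nodes}-node cluster needs ${quorum} votes for quorum."
# -> A 4-node cluster needs 3 votes for quorum.
# In a 2/2 network split each side holds 2 votes, below the 3 needed,
# so neither partition has quorum and no split brain can occur.
```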
Hi all,
I have a question about cluster quorum (proxmox 4).
I currently have a 3 host cluster with shared storage (gluster).
I have an old machine which I could use as a backup/spare in case
a host fails, though adding this server to the cluster would make
the cluster composed of 4
On 05/04/2017 12:07, Guillaume wrote:
> On 05/04/2017 at 11:49, Alessandro Briosi wrote:
>> On 05/04/2017 11:19, Guillaume wrote:
>>> On 05/04/2017 at 11:01, Guillaume wrote:
>>>> On 04/04/2017 at 23:28, Michael Rasmussen wrote:
On 05/04/2017 11:19, Guillaume wrote:
> On 05/04/2017 at 11:01, Guillaume wrote:
>>
>> On 04/04/2017 at 23:28, Michael Rasmussen wrote:
>>> On Tue, 4 Apr 2017 22:48:54 +0200
>>> Guillaume wrote:
>>>
The vrack system already took care of that, that's why I
On 24/03/2017 20:19, Yannis Milios wrote:
> You just need to edit the config file of the VM and change the line(s)
> related to the VM disk from scsiX to ideX.
>
> Posting the VM config would help...
> Generally speaking, Windows is a bit painful to virtualise.
>
>
> On Fri, 24 Mar 2017 at 18:48,
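A minimal sketch of the edit Yannis describes, assuming VMID 100 and the usual Proxmox config location (both are my assumptions, not from the post):

```shell
# Hypothetical example: switch a VM's disks from SCSI to IDE by editing
# its config file. On PVE these live in /etc/pve/qemu-server/<vmid>.conf.
CONF=/etc/pve/qemu-server/100.conf   # 100 is an assumed VMID
cp "$CONF" "$CONF.bak"               # keep a copy before editing
# Rewrite "scsi0: ..." (and scsi1, scsi2, ...) into "ide0: ..." etc.
sed -i 's/^scsi\([0-9]\+\):/ide\1:/' "$CONF"
```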
On 24/03/2017 13:57, Hexis wrote:
> I recently did a P2V onto LVM thin from a Windows 2003 server running
> on an old HP ProLiant with an HP 6400 U-SCSI RAID. I used dd | ssh to
> accomplish it, with of= pointing at the actual LVM volume. This completed
> successfully, however, I cannot seem to get
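The dd | ssh approach Hexis mentions looks roughly like this; every host, device, and volume name below is a placeholder, not from the original post:

```shell
# Hypothetical P2V transfer: stream the physical disk over SSH straight
# into an LVM volume on the Proxmox host. Run on the source machine.
dd if=/dev/sda bs=4M status=progress \
  | ssh root@pve-host 'dd of=/dev/pve/vm-100-disk-0 bs=4M'
```

Note that with a thin volume as the target, writing the full raw image also allocates blocks in the thin pool for any non-zero cruft on the source disk.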
On 06/03/2017 16:17, Marco Gaiarin wrote:
> Interesting. But how can I read this data:
>
> root@magneto:~# lvs
>   LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   data pve twi-aotz-- 783.23g             50.38  25.36
>
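For what it's worth, Data% is the share of the pool's LSize that is actually allocated, so the used space in that output works out as:

```shell
# 50.38% of the 783.23g thin pool is allocated:
awk 'BEGIN { printf "%.2f GiB used\n", 783.23 * 50.38 / 100 }'
# -> 394.59 GiB used
```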
Hi all,
I saw some strange behavior yesterday on a new cluster.
A Windows 2008 guest was suddenly off, but I could not find any clue in
the logs as to why.
It's a VM which was migrated from a physical machine over the weekend.
It had been working fine the whole time.
I then thought it had
On 17/01/2017 15:00, lists wrote:
> Hi,
>
> On 10-1-2017 12:56, Alexandre DERUMIER wrote:
>> maybe as workaround, create a small boot drive with grub as
>> bootloader, to boot the windows system ?
>
> Didn't work out. :-(
>
> My source machine has a raid1 dynamic disk configuration, and
>
On 01/04/2016 11:20, Michael Rasmussen wrote:
I would try setting the disk controller in Proxmox to SATA.
SATA needs the AHCI driver.
Set it to IDE, whose driver should already be included in the kernel.
Then change /dev/sda1 into /dev/hda1 in grub.
It will probably drop you into the rescue shell. Then
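Michael's last step in command form, a sketch assuming an old guest with legacy GRUB (the file paths are assumptions):

```shell
# After switching the controller to IDE, the disk appears as /dev/hda
# instead of /dev/sda, so the boot config must follow. Inside the guest:
sed -i 's|/dev/sda1|/dev/hda1|g' /boot/grub/menu.lst  # legacy GRUB
sed -i 's|/dev/sda1|/dev/hda1|g' /etc/fstab           # mounts too
```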
On 30/03/2016 11:42, Edgardo Ghibaudo wrote:
I solved the previous problem.
Now the Linux guest (RHEL 4) in the Proxmox environment starts, but after
a while the VM panics, reporting the following message:
mount: error 6 mounting ext3
mount: error 2 mounting none
switchroot:
On 10/03/2016 11:11, Florent B wrote:
> Hi everyone,
>
> I think there's a little problem with ceph.conf permissions on Proxmox.
>
> With Infernalis release, all ceph processes are running under "ceph" user.
>
> root user starts processes, then changes user to ceph. All is fine.
>
> But
Hello all,
it would be nice to be able to assign a name (or a comment) to a backup.
This would simplify life when you save a VM into a certain state which
then can be restored later in another VM/CT.
I always have to look up which VM ID it's using. And if the VM gets
removed I lose this
Hi all,
don't know if this has been raised before.
I have created a KVM with a qcow2 file (32G per default).
Now if I look at the file on the filesystem, the size is reported as
32GB, but on closer look it's a sparse file, so with a system inside
using only 5GB, it's actually using 5GB on the server
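The apparent-vs-allocated difference is easy to demonstrate on any sparse file (illustrative, not the original 32G image):

```shell
# Create a sparse file: apparent size 1 GiB, almost no blocks allocated.
img=$(mktemp)
truncate -s 1G "$img"
ls -lh "$img"   # shows the apparent size: 1.0G
du -h  "$img"   # shows the allocated size: (almost) 0
rm -f "$img"
```

For qcow2 images, qemu-img info reports the same pair as "virtual size" and "disk size".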
On 22/10/2014 18:45, Paul Gray wrote:
Your definition of sparse and my definition of cruft are colliding
here.
Sparse == hardly used filesystem.
cruft == non-zeroed, *unused* sectors on the disk
Your sparse filesystem likely has a lot of cruft. The two facets aren't
mutually exclusive.
On 15/09/2014 17:34, Joerg Hanebuth wrote:
But what's that?
Ver. 3.2-4
Subscription is active
At a customer's system I got this from apt-get update:
W: Failed to fetch https://enterprise.proxmox.com/debian/dists/wheezy/Release
Unable to find expected entry
On 15/09/2014 19:05, Joerg Hanebuth wrote:
Yes - dpkg --remove-architecture i386
but
dpkg: error: cannot remove architecture 'i386' currently in use by the database
but I guess uninstalling all i386 will mess up my system - I'm afraid ;)
so I'll have to wait until I have the machine on my
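dpkg refuses to drop an architecture while packages of that architecture are still installed; listing them first shows what is "in use by the database" (a sketch, not run on Joerg's machine):

```shell
# Which foreign architectures does dpkg know about?
dpkg --print-foreign-architectures
# List every installed i386 package; --remove-architecture only works
# once this list is empty.
dpkg-query -W -f '${Package}:${Architecture}\n' | grep ':i386$'
# Then, and only then:
#   dpkg --remove-architecture i386
```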
On 25/03/2014 08:40, Laurent Caron (Mobile) wrote:
Hi,
1/ you should fix your NFS host to be reliable
2/ another option is making backups on local disks and then pulling them.
No NFS dependency then.
3/ if you can ssh into the box, you can reboot it. No need to go on site
I changed the
On 21/01/2014 20:22, Tonči Stipičević wrote:
Hello everybody,
until recently I thought I had a 100% reliable solution for doing p2v
this way:
On 21/01/2014 21:52, Alain Péan wrote:
Yes, I already migrated VMs from VMware to Proxmox. The vmdk image file
format is equivalent to the raw disk format in Proxmox. I think this is
the one you should use. qcow2 is for dynamic disks, growing when adding
more data, as are dynamic disks
On 12/09/2013 09:33, lyt_y...@126.com wrote:
It's a PERC H200I, FW Revision: 7.15.08.00-IR
OK. I think the H200 does not have a BBU, so it can't be that one.
First, check the firmware.
I'd do some tests with some other distro which has a more updated kernel
(there has been activity on the
On 11/09/2013 03:30, lyt_y...@126.com wrote:
hi,
This device configuration is a Dell R510:
2TB SAS Disk x 12
64G Mem
Intel Xeon CPU E5620 x 2
6Gbps SAS Controller (MPT2BIOS-7.11.10.00 (2011.06.02))
Recently the kernel of the device has crashed, occurring once every two days.
I have