On 05/20/2016 03:49 PM, Lindsay Mathieson wrote:
> On 20/05/2016 10:59 PM, Thomas Lamprecht wrote:
>> qm set VMID --args '-D /path/to/log'
>>
>> Should do the trick; I have no erroneous qemu VM available at the moment,
>> so I couldn't check :)
>
> Thanks, but -D only outputs qemu debug info, not std output
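One way to capture stdout/stderr directly is to launch the VM by hand with the
command line Proxmox generates; a rough sketch, assuming VMID 100 and that the
VM can be stopped for debugging:

# dump the kvm command line Proxmox would use for this VM
qm showcmd 100 > /tmp/vm100-cmd.sh
# stop the VM, then start it manually with both streams redirected
qm stop 100
bash /tmp/vm100-cmd.sh > /tmp/vm100.log 2>&1 &

The VM then runs outside the normal management stack, so this is only suitable
for debugging.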
Hi Daniel,
which Proxmox version are you using? Did you do a reboot, or just a restart
of the networking.service (PVE 4.1)? Restarting the service doesn't always
work; please try a reboot.
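For reference, the two options on PVE 4.x look like this (a sketch; the
service restart may not tear down existing bonds/bridges, which is why the
reboot is more reliable):

systemctl restart networking.service
# if bond or bridge changes still don't apply:
reboot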
Cheers,
Alwin
On 05/20/2016 02:37 PM, Daniel Eschner wrote:
Hi there,
I set up a small Proxmox cluster with two bonded interfaces. Here is my
config:
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode active-backup

auto
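The config appears cut off at the last "auto" stanza; a typical continuation
puts a bridge for the guests on top of the bond. A sketch only - the bridge
name and addresses are made-up examples:

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0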
Is there a way to capture the stdout and stderr output from the kvm process
for a VM? Some config option in the conf file?
--
Lindsay Mathieson
That fixed it. Thank you very much!
On Fri, May 20, 2016 at 7:32 AM, Thomas Lamprecht wrote:
Hi,
Did you run pmxcfs manually in a local mode?
ps aux | grep pmxcfs
if yes, stop it and restart the pve-cluster service with:
systemctl restart pve-cluster
if that fails, ensure pmxcfs is stopped, then remove the lockfile with:
rm /var/lib/pve-cluster/.pmxcfs.lockfile
and retry.
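Put together, the recovery sequence looks like this (killall is just an
assumption for stopping a manually started instance; adapt as needed):

# check for a manually started pmxcfs instance
ps aux | grep '[p]mxcfs'
# stop it if found, then clear the stale lock and restart the service
killall pmxcfs
rm /var/lib/pve-cluster/.pmxcfs.lockfile
systemctl restart pve-cluster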
cheers,
I am attempting to create a new cluster and am having quite a bit of
trouble. If anyone could point me in the right direction, I would be greatly
appreciative.
Thank you,
Ben
When I execute the "pvecm create clustername" command, I get the following error:
Job for pve-cluster.service failed. See
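The truncated systemd message above normally continues by pointing at the
service status; the underlying cause usually shows up with:

systemctl status pve-cluster.service
journalctl -xe -u pve-cluster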
Hello Chris,
Severe errors might go to the console - if there is nothing there, change the
syslog configuration to log to it. Also try dmesg - I'm not sure, but it might
be buffered in RAM. Or move some log files (kern.log?, syslog?) to the SSD via
the rsyslogd config.
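A sketch of the rsyslog side (classic selector syntax; the file paths are
examples):

# /etc/rsyslog.d/console.conf: also send kernel messages to the console
kern.*    /dev/console
# or relocate kern.log to an SSD-backed path in /etc/rsyslog.conf
kern.*    -/ssd/log/kern.log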
Regards
Holger
Hi Chris,
is your root file system on the SSD or on the hard disks? 7% free space
may not be enough for wear leveling. If there is no extra space left,
the SSDs might switch to read-only mode...
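To check both points (free space and SSD health), something along these
lines; the device name is an example:

# free space on the root filesystem
df -h /
# SMART attributes of the SSD - look for wear/reserved-space indicators
smartctl -a /dev/sda | grep -i -e wear -e reserved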
Best,
Kai
On 20.05.2016 09:15, Christopher Meyering wrote:
Hi Kai,
here is the info you asked for:
Proxmox version: pve-manager/4.1-22/aca130cf
RAID: all local in the host server, with an LSI MegaRAID SAS 9271-4i
RAID controller
Disks in guest: 2 disks: (system disk) 400 GB .raw - no cache - on SAS
ZFS - 12% filled; (db/solr disk) 120 GB .qcow2 - no
Hi Chris,
some more information would be helpful (most of it can be gathered with the
commands below):
- Which version of Proxmox are you running?
- Are the RAIDs local or network attached (NAS)?
- Which disk bus (virtio) and which disk format (raw, qcow2) do you use? Do
you use caching?
- How large are your virtual disks,
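The commands to gather most of this (the VMID is an example):

# Proxmox and package versions
pveversion -v
# per-VM disk bus, format, cache and size settings
qm config 100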
Hi Brian,
no, the servers operate in a datacenter of OVH France, so they are
built for 24/7 operation without any power savings enabled :/
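To double-check on the host itself (the sysfs paths assume the cpufreq and
intel_idle drivers are loaded):

# current CPU frequency governor per core ("performance" = no scaling)
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
# deepest C-state the intel_idle driver may enter
cat /sys/module/intel_idle/parameters/max_cstate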
On 20.05.2016 at 08:53, Brian wrote:
Eneko probably onto something - any power management enabled?
On Fri, May 20, 2016 at 7:52 AM, Christopher Meyering wrote:
Hi,
the servers are "brand new" (the oldest was bought about 4 months ago).
The following BIOS is used:
Vendor: Intel Corporation
Version: SE5C610.86B.01.01.0009.060120151350
Release Date: 06/01/2015
The logs on the guest don't say anything (bad luck: they live on the
partition which went ro), and in
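When the log partition has gone read-only, the kernel ring buffer inside the
guest is usually still readable and records the moment of the remount:

dmesg | grep -i -e 'read-only' -e 'i/o error' -e 'remount'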
Hi,
On 20/05/16 at 08:19, Christopher Meyering wrote:
Hi folks,
maybe one of you has an idea how to get rid of my strange
"ro-filesystem" problem.
First, some basics:
Proxmox runs on a powerful host, powering the KVM guests from 2 RAIDs
(1 SSD RAID & 2 SAS RAID).
In our pve