On 05/07/2015 10:56 AM, Bart Lageweg | Bizway wrote: Hi,
I'm building a control panel for internal usage. I have the ActiveX
Java applet working.
But I can't get any info on the noVNC console, anyone have an example?
Thanks
Hi
There is a script to start the NoVNC console outside the browser. You
Hi Richard
The other one is missing the firewall tab and has no options for spice.
Do you mean that in the Console menu, the SPICE option is greyed out?
Posting a screenshot showing what is missing would maybe help.
Emmanuel
___
pve-user
Default
Standard VGA
VMWare compatible
Cirrus Logic GD5446
There is no spice.
I am a bit surprised. Are you sure pvesh get version has the same
output on your two nodes ?
Can you start a javascript console when you're logged in and post the
output of
console.log(PVE.Utils.kvm_vga_drivers);
On 05/14/2015 11:04 AM, richard lucassen wrote:
On Wed, 13 May 2015 12:44:37 +0200
Emmanuel Kasper e.kas...@proxmox.com wrote:
Having lost too much time on this issue, I decided to apply a
Microsoft solution: do a reinstall.
Well, if the pve-manager package was not in pristine form
On 05/16/2015 04:56 AM, Adam Thompson wrote:
On 2015-05-12 04:04 AM, Emmanuel Kasper wrote:
Hi
I found out the *qm terminal* command line switch was lacking a
comprehensive documentation so I wrote a wiki article about it.
https://pve.proxmox.com/wiki/Serial_Terminal
That's great! I
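For quick reference, the setup from that wiki article amounts to adding a serial device to the VM config and attaching to it from the host. A minimal sketch (the VM id 100 is made up):

```
# hypothetical /etc/pve/qemu-server/100.conf excerpt
serial0: socket
```

You can then attach from the host with qm terminal 100; the guest side typically also needs a getty on ttyS0, as the article explains.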
Hello
I will be attending the Annual Debian Conference, this year in
Heidelberg Germany [1] and will hopefully hold a talk about Vagrant,
and Proxmox (still in review stage) [2]
If you happen to attend the Conference, we can get in contact beforehand
so we can arrange an informal meeting, and have a
On 07/17/2015 06:40 PM, Gerald Brandt wrote:
On 2015-07-17 01:34 AM, Sten Aus wrote:
Hi
Is it possible to have a little lock sign next to the VM icon in the GUI
when a VM is locked (for backup etc.)?
Hi Gerald
A problem here might be that different storages have different
interpretations of what is
Hi
On 10/26/2015 02:25 PM, Dmitry Petuhov wrote:
> There's issue with NFS: if you try to send over it more than network can deal
> with (100-120 MBps for 1-gigabit), it imposes several-second pauses, which
> are
> being interpreted like hardware errors. These bursts may be just few seconds
>
On 10/21/2015 06:43 PM, hackru wrote:
> Hello
>
>
> I have a couple of questions
>
> i create containers like this:
>
> pct create $id /var/lib/vz/template/cache/$template \
> -net0 name=eth0,bridge=$bridge,ip=$ip,gw=$gw \
> -cpulimit $cpus \
> -memory $mem \
> -storage rbd \
> -mp0
On 11/10/2015 04:31 PM, Daniel Bayerdorffer wrote:
> Hi Emmanuel and Alain,
>
> Thank you for the help and advice. I'll try staggering the backups for now
> and see how it goes. I do like knowing I can limit the bandwidth as well.
>
> Emmanuel, do you know if vzdump.conf can be overwritten by
On 11/07/2015 10:28 PM, Michael Rasmussen wrote:
> On Sat, 7 Nov 2015 16:03:25 -0500 (EST)
> Daniel Bayerdorffer wrote:
>
>> Hi Alain,
>>
>> Thanks in advance for your help. At first I thought my storage was full as
>> well. So I deleted any existing backups and tried
On 10/13/2015 01:34 PM, hackru wrote:
> Hello
>
> Is it possible to mount(not bind mount) block device to lxc container? (pve4)
> What i'm trying to do - i need separate directory for /usr/ /var/ /opt/
> mounts when using containers.
yes, you can do quite a lot with container mountpoints
you
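As a sketch of what mountpoints can do (the VM id, storage names and sizes below are made up), a container config could carve /var and /opt out into separate volumes:

```
# hypothetical /etc/pve/lxc/101.conf excerpt
mp0: rbd:vm-101-disk-2,mp=/var,size=8G
mp1: rbd:vm-101-disk-3,mp=/opt,size=8G
# a bind mount of a host directory would instead look like:
# mp2: /srv/shared,mp=/mnt/shared
```

The same can be set from the CLI, e.g. pct set 101 -mp0 rbd:8,mp=/var to allocate a new 8 GB volume.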
> I have set up the storage with 4x12Gbps SAS disks and a LSI 9300 HBA
> controller...
> locally on the storage i get with different random dd tests around
> 400MB/s read, which is quite okay (i dont have cache until now).
>
> The storage and proxmox host is direct attached with a 20Gbpe bond and
Hi Gilberto
On 09/10/2015 02:33 PM, Gilberto Nunes wrote:
> Well... AFAIK, DRBD9 storage only support RAW disc, and RAW disc do not
> support online resize disk, right? Only QCOW2 support it... Forgive me if I
> am wrong!
No, online resizing does not depend on QCOW2 vs RAW
You can refer to the
On 10/01/2015 10:51 AM, Gilou wrote:
> Le 01/10/2015 10:44, Alexandre DERUMIER a écrit :
>> hi,
>>
>> I think this bug has been fixed for proxmox 4.0
>>
>> https://git.proxmox.com/?p=pve-qemu-kvm.git;a=commit;h=68b7e74b8d9817bc10558faf009956053ee1ea8b
>>
Hi
The last Windows updates introduced new SUSE virtio drivers instead of the
previous Red Hat-based ones, and these drivers are broken!
We warned about this in the forum:
https://forum.proxmox.com/threads/do-not-use-suse-virtio-drivers-from-windows-update.25094/#post-125865
On 12/14/2015 12:48 AM,
On 01/04/2016 12:30 PM, Michael Pöllinger wrote:
> Oh sorry.
>
> I missed the links:
> https://forum.proxmox.com/threads/task-xxx-blocked-for-more-than-120-seconds.25167/
> There are numerous discussions about it atm.
>
Hi Michael !
task-xxx-blocked-for-more-than-120-seconds:
this message
On 01/04/2016 07:53 PM, Michael Pöllinger wrote:
> Hi Emmanuel.
>
> Wow, these are good tips we can check for. Thank you!
>
> What we´ve started with is my thread in december.
> [So Dez 27 05:17:44 2015] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0
> action 0x6 frozen
> [So Dez 27 05:17:44
On 05/31/2016 12:34 PM, Marco Gaiarin wrote:
>
> I'm setting up a test Proxmox installation (one host, local storage).
> Latest version.
>
> All tests seems to go well, guests boot and do what expected.
>
> Now we are trying to use an RDX backup system, hitting some trouble.
>
> I'm following:
On 05/31/2016 01:01 PM, Marco Gaiarin wrote:
> Mandi! Emmanuel Kasper
> In chel di` si favelave...
>
>> It might be that qemu is trying to open /dev/sg3 with an incorrect
>> open(2) flag. Can you try adding ,cache=writethrough to the scsi0 line
>> in the vm con
> virtio0: local-lvm:vm-100-disk-1,size=1T
> scsi0: /dev/sg3,cache=writethrough
Can you try if adding
scsihw: virtio-scsi-pci
at the end of the config file helps ?
The virtio scsi controller should allow passthrough of scsi tape devices
according to the Qemu wiki.
Hi Sebastian
I guess /dev/dm-8 is your multipath device? (You can check that with
multipath -ll )
From what I see from your syslog, you have a write error on the
multipath device dm-8 and at the same time to the underlying block
devices sdc and sdd. Especially if everything is working normally
On 06/22/2016 09:31 AM, Marco Gaiarin wrote:
> Mandi! Michael Rasmussen
> In chel di` si favelave...
>
>> Multipath must be manually configured via CLI.
>
> Seems rather a 'FAQ', I've hit the same thing a month ago, but I
> found those posts on the forum:
>
>
On 02/26/2016 08:16 PM, Michael Peele wrote:
> I'm working on building my personal SOHO cluster. I started with a Windows
> Home Server for everything, then found Proxmox.
> I've got a few old PCs and one newish one.
> I plan to run my primary storage with family photos, videos, and documents,
>
On 02/23/2016 05:35 PM, Mohamed Sadok Ben Jazia wrote:
> Hello list,
> I use proxmox pve API to create LXC containers with a chosen root password
> for ssh access and it's working well.
> I'm asking if it's possible to let the CT owner use their private key
> instead of the root password, and this with
On 01/26/2016 02:10 PM, Frank, Petric (Nokia - DE) wrote:
> Hello,
Hi
> I am actually running a proxmox cluster built of 6 servers.
>
> When selecting hosts summary info tab sometimes a popup comes up for a
> second. It tells "Too many redirections (599)".
> After disappearing the contents is
On 02/15/2016 11:13 AM, ad...@extremeshok.com wrote:
> +1 also having the same issue.
This problem is being discussed at the moment in the forum
see especially
https://forum.proxmox.com/threads/lxc-backup-randomly-hangs-at-suspend.25345/page-4#post-130294
if you can provide debug information
On 03/09/2016 07:14 PM, Stefan Plattner wrote:
> Hello everyone!
>
> I used the "Move Disk" function in the "Hardware" tab of a
> stopped/offline Windows-VM. After the process was finished, I re-started
> the VM and the Guest greeted me with the following, in this case
> SQLServer related,
On 03/31/2016 09:51 AM, Edgardo Ghibaudo wrote:
> The Linux guest (RHEL 4) in VMware 5 has the following configuration:
> - SCSI controller 0LSI Logical Parallel
> I tried in Proxmox to use all the possible disk emulation (VIRTIO, IDE,
> SATA, SCSI), but the result is always the same
On 03/31/2016 12:39 PM, Edgardo Ghibaudo wrote:
> Hi Emmanuel,
> In VMware5, the RHEL4 guest has no VMware guest extension.
> I tried different SCSI controllers (VMware PVSCSI, MegaRAID SAS
> 8708EM2), but the VM always crash (kernel panic) with the same message
>
> Thank you,
> Edgardo
>
Hi
On 05/12/2016 09:35 AM, Marco Gaiarin wrote:
>
> Sorry, again a doubt.
>
>
> The preferred method of using iSCSI is via ''LVM Groups with Network
> Backing'':
>
> http://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing
>
>
> but if i use multipath, i'm just using LVM
On 04/29/2016 11:54 AM, Frank Thommen wrote:
> Hi,
>
> I have updated one of our PVE systems from 4.1-1 to 4.1-33 in one go and
> now I am seeing an overhauled webUI (specifically the summary pages).
> While I like the zoomable graphs, I have several issues with the new UI:
>
> a) font and line
On 05/04/2016 03:34 PM, Denis Morejon wrote:
> I need to share the space of a new SAN device to some Proxmox nodes in a
> cluster. The SAN only uses iSCSI.
> What is the best way?
>
You can have a look at
http://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing
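In short, the wiki approach maps to two entries in /etc/pve/storage.cfg: the raw iSCSI target, and an LVM storage using one of its LUNs as base volume. A sketch with made-up portal, target and volume names:

```
# hypothetical /etc/pve/storage.cfg excerpt
iscsi: san1
        portal 192.168.0.10
        target iqn.2016-05.com.example:san1
        content none

lvm: san1-lvm
        vgname vg_san1
        base san1:0.0.0.scsi-example
        content images
        shared 1
```

The GUI fills in the base volume for you when you create the LVM storage on top of the iSCSI one.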
On 05/10/2016 01:48 PM, Lindsay Mathieson wrote:
> Could someone clarify what the default limit is? postings online imply
> that it is 10,000Kbps but I can't find a definitive answer.
>
> Thanks,
>
Hi Lindsay
There used to be a default limit in old release but it has been removed
since at least
On 07/01/2016 02:52 AM, Lindsay Mathieson wrote:
> On 1 July 2016 at 10:31, Lindsay Mathieson
> wrote:
>> Total brain fart here - looking at the src for pve-qemu-kvm at
>> https://git.proxmox.com/?p=pve-qemu-kvm.git;a=summary
>>
>> What on earth is the clone url? I
Hi Eneko,
> It seems Proxmox will fail to start a VM of Windows 10/2016 type with
> more than 2 GB RAM?
> I recall something about this in the mailing list, and have changed the
> type to Windows 8 and it works, but I was surprised this is not yet
> fixed in non-subscription repo :)
>
Which
On 01/22/2017 03:19 PM, Leonardo Dourado wrote:
> Hi Everyone!
>
> I'm a new member on that community so on Proxmox user...
>
> Does anyone know any trick to import/export images from Proxmox to Hyper-v?
> Occurs that I have many images on my Hyper-v Server (also .wim) and I'd like
> to have
On 02/14/2017 04:24 PM, Lari Tanase wrote:
> Hi,
Hi Lari
> I've installed the 4.4. for some tests and learning activities on a
> spare machine I have found some issues
>
> 1. typo error in code
> https://github.com/proxmox/qemu-server/blob/master/PVE/API2/Qemu.pm line
> 2767
>> die "unable to
On 01/24/2017 03:22 PM, Elias wrote:
> Hi all,
>
> I created a snapshot of a virtual machine. The "Date/Status" field shows
> the correct date, while the timestamp shown in the "edit snapshot"
> window is incorrect. It shows "Sun Jan 18 1970 05:34:26 GMT+0100 (CET)".
> The time on Proxmox is set
On 10/06/2016 11:27 AM, Thomas Lamprecht wrote:
> Hi,
>
>
> On 10/06/2016 11:01 AM, Eneko Lacunza wrote:
>> Hi,
>>
>> About the "disable" thing, wouldn't it be much clearer and less prone
>> to confusion to rename it to "disable-ha-and-stop"? And I guess,
>> "enable-ha-and-start"?
>
> for me
Hi Eneko
> Just noticed the repository info wiki doesn't give information about apt
> keys:
> https://pve.proxmox.com/wiki/Package_Repositories
>
> The following should be noted somewhere:
>
> wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -
Hi Haoyun
Have a look at
https://forum.proxmox.com/threads/pveproxy-become-blocked-state-and-cannot-be-killed.24386/page-2#post-149858
for some hints for this problem
On 10/19/2016 10:25 AM, haoyun wrote:
> and restarting pveproxy failed, the command was blocked for a long time:
>
>
>
On 10/19/2016 11:22 AM, haoyun wrote:
> thanks
> I have looked at that URL; it is not my question. Now my pveproxy does not start.
install the lsof package:
apt-get install lsof
what is the output of this command?
lsof -i :8006
if this command does not produce output,
try to restart pveproxy
service
On 10/05/2016 02:11 PM, mj wrote:
> Hi,
>
> Just noticed something we find counterintuitive in proxmox:
>
> We are using HA for some of our machines.
>
> As we needed to work on an HA-managed machine, we disabled HA, so that
> we could manually halt/reboot/start it.
>
> But to our surprise,
>> I am non-subscriptions, and I just did an update yesterday to see if
>> it would fix the error. I'll be running a memtest today to see if I
>> can find anything.
>>
>> I hadn't done an update in awhile before that, so I'm leaning towards
>> a hardware issue. What do you think?
Yes, most
On 10/10/2016 04:29 PM, Adam Thompson wrote:
> The default PVE setup puts an XFS filesystem onto each "full disk" assigned
> to CEPH. CEPH does **not** write directly to raw devices, so the choice of
> filesystem is largely irrelevant.
> Granted, ZFS is a "heavier" filesystem than XFS, but it's
On 12/07/2016 05:03 PM, Gilberto Nunes wrote:
> Hi list
>
> I get this issue from an external USB hard disk:
>
> raps: atop[87646] trap divide error ip:40780a sp:7fff3074a928 error:0 in
> atop[40+26000]
> [8244487.641127] sd 988:0:0:0: [sdf] tag#0 FAILED Result: hostbyte=DID_OK
>
On 01/09/2017 07:04 PM, Vadim Bulst wrote:
> Hi Jeff,
>
> i totally agree! I also tried to deploy PVE via Puppet:
>
> class urzpvesrv (
>
> ) inherits urzpvesrv::params {
>
> package { 'systemd-sysv':
> ensure => 'installed',
> provider => 'apt',
> }
> package {
On 01/04/2017 12:27 PM, Vadim Bulst wrote:
> Dear list,
>
> I'd like to preseed / automate the PVE-installation with Foreman. I'm
> following the howto to install PVE on Debian Jessie (
> https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie ) . My
> question is about partition layout.
On 12/29/2016 11:09 AM, Emmanuel Kasper wrote:
>>
>> On 12/29/2016 10:57 AM, Florent B wrote:
>>> Hi everyone,
>>>
>>> Today I added a new node to my PVE cluster (every node up-to-date).
>>>
>>> When I connect to one of my 3 old nodes, t
>>>
>>> Emmanuel
>>>
>> Hi Emmanuel,
>>
>> host102 is an old node, which is not displaying the last one host107 :
>>
>> root@host102:~# pvecm nodes
>>
>> Membership information
>> --
>> Nodeid Votes Name
>> 3 1 10.109.0.101
>> 2 1
Hi Chance
On 01/04/2017 01:56 PM, Chance Ellis wrote:
> Hi Emmanuel,
>
> ZFS RAID 1 is supported for installation correct?
Yes, ZFS RAID levels are supported. It is with Linux software RAID
(dmraid) that we heard of data corruption problems.
On 12/29/2016 10:57 AM, Florent B wrote:
> Hi everyone,
>
> Today I added a new node to my PVE cluster (every node up-to-date).
>
> When I connect to one of my 3 old nodes, the new node is not displayed
> (at all, not in the list) in "datacenter" list in GUI.
>
> If I connect to new node GUI,
On 12/20/2016 12:43 PM, Eneko Lacunza wrote:
> Hi,
>
> Sure, I meant datacenter, sorry; I didn't realize the acronym was in
> spanish :)
>
> So let me rewrite the question :-)
>
> We are doing a preliminary study for a VMWare installation migration to
> Proxmox (14 hosts total, 7 in each
On 12/16/2016 12:49 PM, Eneko Lacunza wrote:
> Hi all,
>
> We are doing a preliminary study for a VMWare installation migration to
> Proxmox.
>
> Currently, customer has 2 CPDs in HA, so that if the main CPD goes down,
> all VMs are restarted in backup CPD. Storage is SAN and storage data is
>
On 03/30/2017 06:45 AM, Rui Lopes wrote:
> I've updated my vagrant environment to work with this new version at:
>
> https://github.com/rgl/proxmox-ve/tree/pve-5
>
> This environment lets you try pve inside a VM (VirtualBox or KVM/libvirt)
> by running a single vagrant up command.
>
Thanks for
On 07/19/2017 09:59 PM, Gilberto Nunes wrote:
> Hi...
>
> One doubt: is SCSI better than virtio?
>
> Thanks
Read The Fine Manual :)
https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_hard_disk
On 07/19/2017 11:32 AM, Mikhail wrote:
> Hello,
>
> Thanks for your responses.
> The issue appears to be somewhere beyond iSCSI.
> I just tried to do some "dd" tests locally on the storage server and I'm
> getting very low write speeds:
do not use dd to benchmark storages, use fio
with a
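As an illustration, a minimal fio job file for random 4k writes could look like this (the directory, size and runtime are arbitrary and should match your setup); run it with fio randwrite.fio:

```
# randwrite.fio - hypothetical job file, adjust to your storage mount
[randwrite-test]
directory=/mnt/storage-test
rw=randwrite
bs=4k
size=1G
ioengine=libaio
direct=1
runtime=60
time_based
group_reporting
```

Unlike dd, this bypasses the page cache (direct=1) and reports latency percentiles, not just throughput.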
Hi Devin
On 07/03/2017 04:49 AM, Devin Acosta wrote:
> *I am using the latest Proxmox Beta in a 2-node cluster right now (just
> doing some testing). From what I read it appears that I need to setup
> fencing, and possibly watchdog in order to finish making the cluster fully
> HA? I want the
On 05/16/2017 08:56 PM, Uwe Sauter wrote:
> Hi,
>
> I just noticed an (intentional?) inconsistency between the WebUI's Ceph OSD
> page vs. the tasks view on the bottom and
> the CLI:
>
> If you go to Datacenter -> node -> Ceph -> OSD and select one of the OSDs you
> can "remove" it with a
> Starting Nmap 6.47 ( http://nmap.org ) at 2017-06-20 13:42 BST
> Nmap scan report for blah
> Host is up (0.00022s latency).
> PORT STATE SERVICE
> 111/tcp open rpcbind
> 2049/tcp open nfs
>
> Nmap done: 1 IP address (1 host up) scanned in 0.59 seconds
>
>
>
> What am I missing?
>
On 06/20/2017 03:12 PM, Nux! wrote:
> Emmanuel,
>
> Thanks, that didn't work:
> rpc mount export: RPC: Timed out
>
> Could be because of the way Gluster does NFS. Any work around?
the online check for nfs uses
showmount --exports nfs_server
this should work too for a Gluster NFS storage
On 05/18/2017 02:56 PM, Uwe Sauter wrote:
> # mount -t nfs -o vers=4,rw,sync :$SHARE /mnt
> mount.nfs: mounting aurel:/proxmox-infra failed, reason given by server: No
> such file or directory
aurel:/proxmox-infra
are you using the right path here?
looking at your exports file you should try
On 10/14/2017 10:52 AM, Stefan Fuhrmann wrote:
> Hello all,
>
> I want to update the system but get an error:
>
> The following packages have unmet dependencies:
> pve-firmware : Conflicts: firmware-linux-free but 3.4 is to be installed
> E: Broken packages
>
> firmware-linux-free is not
Hi Uwe
On 08/29/2017 01:44 PM, Uwe Sauter wrote:
> I thought so but when setting up new VMs where the disks don't have
> partitions yet, there is yet nothing that you could use
Disks too have ids, not only partitions or file systems.
If you use /dev/disk/by-id in your config, it should be stable
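For example, a whole-disk passthrough line in a VM config could use the stable path (the VM id and disk id below are made up):

```
# hypothetical /etc/pve/qemu-server/100.conf excerpt
scsi1: /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL1234
```

The by-id name encodes the disk model and serial number, so it survives reboots and controller reordering, unlike /dev/sdX.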
On 10/05/2017 06:23 PM, Michael Cooper wrote:
> Hello Everyone,
>
> New to the list here and on Proxmox 5.0. I have 16 TB of SAN storage, so
> I created myself an NFS target and an iSCSI target and tried to add those to
> my Proxmox servers, however I get a storage "iSCSI Name" is not online. Is
On 10/18/2017 02:03 PM, Markos Vakondios wrote:
> Hello List,
>
> My use case, is to provide a large hardware-RAID backed volume (hence no
> ZFS) to PVE for continuous decent performance writes by a running container
> (cctv dvr).
>
> I have 6 x 6TB SATA drives on a hardware RAID 10
Hi Miguel
>> Something am I missing?
>>
>> On the other hand, a few more questions:
>>
>> - Should I move to RAW format? Pros? Cons? I have read that backups take
>> longer but performance boost is better.
choosing raw vs qcow2 is a performance vs features tradeoff
see the online pve reference
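To make the tradeoff concrete, the same disk looks like this on two storage types (the VM id and volume names are made up): raw on LVM-thin, where snapshots come from the storage layer, versus qcow2 on a directory storage, where snapshots live inside the image file:

```
# raw volume on LVM-thin:
scsi0: local-lvm:vm-101-disk-0,size=32G
# qcow2 image on a directory storage:
scsi0: local:101/vm-101-disk-0.qcow2,size=32G
```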
On 11/14/2017 09:41 AM, Eneko Lacunza wrote:
> Hi all,
>
> We have a Debian 9 VM running on a v3.4 Proxmox cluster (yes I know we
> should upgrade it), that periodically hangs.
>
> When it hangs, we can't reach it neither by network nor by Console (it
> is frezeed). Also, CPU is shown at about
>> Hi Eneko
>> What is the status of the qemu process when the VM hangs ? Is it in D
>> state ?
> I don't think but didn't check it. Will do next time it hangs.
>> Also which SCSI controller type are you using ?
> It was set as default (LSI 53C895A), just changed to virtio on this
> morning's
On 11/14/2017 12:45 PM, Ian Coetzee wrote:
> Hi Guys,
>
> Thank you for this awesome feature. My 2cents worth though
>
> Trying to import an OVF that was coverted from a XVA that was exported
> from a XenServer 6.5 fails gloriously. Also doesn't import into
> VirtualBox, so I am gonna blame
On 11/08/2017 11:20 PM, Francois Deslauriers wrote:
> is there any plan to support OVA, OVF import into Proxmox ?
> And if so when this could be expected ?
Technically, we only support import from OVF.
But an OVA file is just a tarball containing the OVF XML descriptor and a VMDK disk image.
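That means unpacking an OVA by hand is a one-liner. A self-contained sketch, with dummy files standing in for the real descriptor and disk (the qm importovf step is commented out since it needs a PVE host, and the VM id and storage name are made up):

```shell
# Build a dummy OVA to show it is a plain tar archive
mkdir -p /tmp/ova-demo && cd /tmp/ova-demo
printf '<Envelope/>' > appliance.ovf
printf 'fakedisk' > appliance-disk1.vmdk
tar -cf appliance.ova appliance.ovf appliance-disk1.vmdk
rm appliance.ovf appliance-disk1.vmdk

# "Converting" OVA to OVF is just unpacking the archive
tar -xf appliance.ova
ls appliance.ovf appliance-disk1.vmdk

# On a PVE host you could then import the descriptor:
# qm importovf 200 appliance.ovf local-lvm
```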
On 12/11/2017 04:50 PM, Lindsay Mathieson wrote:
> Also I was unable to connect to the VM's on those nodes, not even via RDP
>
> On 12/12/2017 1:46 AM, Lindsay Mathieson wrote:
>>
>> I dist-upgraded two nodes yesterday. Now both those nodes have multiple
>> unkillable pveproxy processes. dmesg