Hi Sebastian,
please write in English; this is an English-language list.
Normally it depends on your host system.
If you have only 1 socket, it is best to increase the cores and leave the
sockets setting at 1.
The sockets setting is only interesting if you have a NUMA machine.
So to answer your question, 3 cores would be the
Proxmox VE currently has no global view across clusters; you must manage
each cluster by itself.
Regards,
Wolfgang
On 18.02.15 at 10:13, Andrew (Mac) McIver wrote:
Is Proxmox VE the right tool for a star layout of multiple small
virtualization clusters, geographically distributed, but with
Freeze works on wheezy-backports, if the underlying FS supports it!
On 02/20/2015 09:29 AM, Alexandre DERUMIER wrote:
What improvements does using a guest agent offer?
I think currently
guest-shutdown, to shut down the guest,
and
guest-fsfreeze-freeze, to freeze the filesystem for
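As a rough sketch of how such a call looks from the host in later Proxmox VE
versions (vmid 100 is a placeholder):
qm agent 100 fsfreeze-freeze
qm agent 100 fsfreeze-thaw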
Hi Andrew,
if you have more than 16 nodes, it can happen that the corosync ring packet
is too slow.
Your cluster then thinks the nodes are offline.
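To see whether the ring is healthy, you can check the ring status on a node,
for example:
corosync-cfgtool -s
(a sketch; the exact output depends on your corosync version).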
Regards,
Wolfgang
On 05/12/2015 06:04 AM, Andrew Thrift wrote:
Hi,
I notice the wiki mentions a soft-limit of 16 nodes.
What happens
There were some issues in 0.6.3, but they are fixed in 0.6.4!
https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.6.4
On 06/30/2015 02:08 PM, Angel Docampo wrote:
Hi there!
Is it based on zfs send-receive? I thought it was buggy on linux...
perhaps it was on 0.6.3?
Anyway, that's a great
Yes, the maximum is 16 Virtio disks.
On 07/30/2015 10:25 PM, Keri Alleyne wrote:
Good day,
I'm monitoring this thread:
https://forum.proxmox.com/threads/9782-There-is-now-a-limit-of-virtio-devices-drives
Quote (originally posted by dietmar):
You can have 4 IDE disks, 14 SCSI disks, 16 VIRTIO
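In other words, a VM config can hold entries virtio0 through virtio15; a
hypothetical config line looks like:
virtio0: local:100/vm-100-disk-1.raw,size=32G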
At the moment you have to use pct
At the moment, the supported storages with snapshot capability are ZFS
(ZFSPoolPlugin) and RBD.
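A minimal sketch, assuming container 102 on such a storage (the snapshot name
is hypothetical):
pct snapshot 102 mysnap
pct rollback 102 mysnap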
On 10/14/2015 09:47 PM, Jérémy Carnus wrote:
Hi,
With PVE 4.0 now stable, I just would like to know which kind of
tool / software I can use if I want to make snapshots of
Best Regards,
Wolfgang Link
w.l...@proxmox.com
http://www.proxmox.com
Proxmox Server Solutions GmbH
Kohlgasse 51/10, 1050 Vienna, Austria
On 10/15/2015 07:15 PM, Thiago Damas wrote:
root@sm9:~# pct resize 102 rootfs +0
root@sm9:~# zfs get refquota volumes/subvol-102-disk-1
NAME PROPERTY VALUE SOURCE
volumes/subvol-102-disk-1 refquota none default
root@sm9:~# pct resize 102 rootfs +1
zfs error:
Yes, it will be included in the GUI soon.
On 10/15/2015 09:39 PM, Jérémy Carnus wrote:
Ok, but is it in your plan to add such a tool?
Thanks
On 15/10/2015 04:16, Wolfgang Link wrote:
At the moment you have to use pct
At the moment supported storage with snapshot capability are ZFS
I added a note to the upgrade and 4.x cluster wiki pages to clarify this
On 10/07/2015 12:30 PM, Angel Docampo wrote:
On 07/10/15 06:20, Alexandre DERUMIER wrote:
It's possible to upgrade all nodes to jessie/Proxmox 4.0 at the same time,
without rebooting the host (and without stopping the VMs).
A wiki
There are no official Ceph packages for Debian jessie, but AFAIK they are
coming soon.
On 07/07/2015 11:23 AM, Fabrizio Cuseo wrote:
Hello there.
I am trying a 3 host cluster with PVE 4.0beta with ceph server, but when I try
to install ceph (pveceph install -version hammer, or pveceph install
Hi,
here you can find the needed information:
https://pve.proxmox.com/wiki/Separate_Cluster_Network
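As a sketch, the approach from that page for a new cluster is roughly
(addresses are placeholders):
pvecm create mycluster -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1
and on each joining node:
pvecm add 10.10.10.1 -ring0_addr 10.10.10.2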
On 01/08/2016 08:55 AM, Frederic Van Espen wrote:
Hi,
Is it possible to change the node IP addresses without breaking the
cluster for an extended amount of time? Are there any caveats?
I would
Hi,
does anybody have a working PVE4 setup with a NetApp NAS and NFS?
I would need a working export setting for PVE4.
Benchmarks LVM vs file.raw
Setup:
Physical disk: Crucial MX200, only used by the test VM/CT
Debian 8, current version
Extra disk: 32 GB (QEMU, no cache, virtio bus)
QEMU on LVM
dd if=randfile of=/dev/vdb bs=4k
220662+0 records in
220662+0 records out
903831552 bytes (904 MB) copied, 2.58608 s, 349 MB/s
dd
CRIU does not work properly with LXC containers, which is what is needed for
live migration.
On 06/16/2016 03:08 PM, Daniel Eschner wrote:
> Very cool ;)
>
> I can wait a couple of days;)
>
> Live Migration also implemented as well?
>
>
>> On 16.06.2016 at 15:06, Wolfga
Hi,
I also have a T320 with the H310 controller, and it gave me a hard time
because the Dell firmware was not working properly, so I decided to flash
the LSI firmware onto this controller.
I used ZFS with the IT-mode Dell firmware before; now with the LSI
firmware it works perfectly.
On
You can make a clone on top of the needed snapshot and use the clone in
your VM config instead of the normal disk.
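A minimal sketch with hypothetical names:
zfs clone rpool/data/vm-100-disk-1@snap1 rpool/data/vm-100-clone1
Then point the VM config at vm-100-clone1 instead of vm-100-disk-1.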
On 03/23/2016 06:34 AM, Lindsay Mathieson wrote:
> On 23 March 2016 at 15:24, Dietmar Maurer wrote:
>
>> This is a ZFS limitation - you can only rollback to latest
Hi,
here is a link that explains how to test multicast:
https://pve.proxmox.com/wiki/Multicast_notes#Testing_multicast
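For example, that page tests multicast by running omping on all nodes at the
same time (node names are placeholders):
omping -c 600 -i 1 -q node1 node2 node3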
These two links explain how to set up a cluster and the network:
https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster
https://pve.proxmox.com/wiki/Separate_Cluster_Network
Is it possible to kill the process with the kill command?
On 04/25/2016 07:53 AM, Régis Houssin wrote:
> Hello
>
> I have a worry in recent weeks: a CT (LXC + ZFS) consumes 100% of CPU
> and is impossible to stop,
> and I could not connect to the CT (SSH or other).
> I have to hard reboot, the host
Sorry, forget what I wrote; it is not relevant to your problem.
On 05/13/2016 09:30 AM, Albert Dengg wrote:
> On Fri, May 13, 2016 at 09:14:55AM +0200, Wolfgang Link wrote:
>> Hi Albert,
>>
>> Have you already installed openvswitch 2.5?
>>
>> If yes make a dow
Hi
which version of the virtio driver do you use, and can you please provide
the VM config?
On 07/25/2016 01:35 PM, Tonči Stipičević wrote:
> Hello to all,
>
> after I migrated to the latest version (enterprise repos), I have tested
> live migration.
>
> So, vm-win7 cannot survive more than 2
Try to use Spice as the display type.
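A sketch, assuming VM 100:
qm set 100 -vga qxl
and then connect with a Spice client instead of the noVNC console.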
On 07/26/2016 08:01 PM, Tonči Stipičević wrote:
> Hello Wolfgang,
>
> this is the vm config :
>
> agent: 1
> bootdisk: virtio0
> cores: 1
> ide2: rn102:iso/virtio-win-0.1.102.iso,media=cdrom,size=156988K
> memory: 2048
> name: w7test
> net0:
Hi,
yes, it should work:
mode=balance-rr
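For reference, with a plain Linux bond (not OVS) a balance-rr setup in
/etc/network/interfaces looks roughly like this (interface names are
assumptions):
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode balance-rr
    bond-miimon 100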
> Lindsay Mathieson wrote on 14 July 2016 at 09:42:
>
>
> Is it possible to create a balance-rr bond with OVS?
>
> --
> Lindsay
Hi,
Yes, you have to change it manually; this is intended and not a bug.
If you have a cluster, you need to change the corosync.conf too.
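Note that when editing corosync.conf by hand, you also have to increase the
config_version in the totem section, or the change will not be accepted; a
sketch:
totem {
  config_version: 4
  ...
}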
On 02/24/2017 08:27 PM, Lari Tanase wrote:
> after some debug I found that the trouble is that in the
This output is generated by corosync.
But you are correct, it is not possible to delete an IP (ring0_addr).
Please make a bugzilla entry.
https://bugzilla.proxmox.com/
On 08/30/2016 12:35 PM, Florent B wrote:
> Hi everyone,
>
> I configured my corosync.conf to use nodes IP address as
Proxmox VE uses its own management stack and makes no use of libvirt, so
we can only manage other Proxmox VE hosts and their KVM and LXC containers.
If you would like to use it, you have to migrate all your KVM machines to
Proxmox VE, but this is no problem because only the config differs.
On 10/27/2016
Your pool size is 3 × 400 GB, so 1.2 TB is correct, but your config says 3/2.
This means you have 3 copies of every PG (Placement Group), and a minimum of
2 copies are needed to operate correctly.
This means if you write 1 GB, you use 3 GB of your free storage.
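You can verify this with (the pool name rbd is a placeholder):
ceph osd pool get rbd size
ceph osd pool get rbd min_size
ceph df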
On 12/14/2016 12:14 PM, Daniel wrote:
> Hi there,
>
> i
Yes, it looks like you are missing jewel ;-)
https://pve.proxmox.com/wiki/Ceph_Hammer_to_Jewel
On 03/23/2017 11:03 AM, Eneko Lacunza wrote:
> Hi Martin,
>
> On 22/03/17 at 14:51, Martin Maurer wrote:
>> Hi all!
>>
>> We are proud to announce the release of the first beta of our Proxmox VE
>> 5.x
Hi,
You can't destroy datasets where snapshots exist.
zfs list -t all
will show you all datasets,
and zfs destroy -R will erase all datasets that reference the
given set.
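A hypothetical example:
zfs list -t all
zfs destroy -R rpool/data/vm-100-disk-1
This destroys the dataset together with all its snapshots and clones.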
On 03/09/2017 08:35 PM, Luis G. Coralle wrote:
> Hi.
> I was trying to sync two vm with pve-zsync between two nodes
Hi,
Proxmox VE is not a storage box, so we do not provide this kind of setup.
ZFS over iSCSI is used if you have an external storage box like FreeNAS.
Debian Stretch uses LIO as the iSCSI target, which should also work with IET.
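A sketch of a matching storage.cfg entry (all values are placeholders):
zfs: mytarget
    pool tank
    portal 192.168.1.10
    target iqn.2003-01.org.example:storage
    iscsiprovider LIO
    content images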
On 07/19/2017 01:10 AM, Mikhail wrote:
Hello,
I'm trying to setup
Hi Tonci,
I guess it is the network traffic.
You should limit the replication speed or use a separate network.
On 07/07/2017 07:35 AM, Tonči Stipičević wrote:
Hi to all,
I'm testing pvesr and it works correctly so far; it is a big step ahead
regarding migration/replication w/o shared storage.
Hi Gilberto,
only the pvetest repo is online yet.
But I think you have the pve4 key, which is a different key from the pve5
repo.
On 06/02/2017 05:50 PM, Gilberto Nunes wrote:
Hi
Last few days, I get this error when I try apt update
Hit:27 https://download.docker.com/linux/debian
It looks like a kernel problem.
Not sure what exactly the problem is, but every kernel (mainline and Ubuntu)
newer than 4.10 will trigger this behavior on some HW.
Debugging takes some time.
I will report the result in this forum thread.
Hi Mehmet,
>Hello guys,
>
>is it possible to configure a proxmox Node to set a specific start-id for a vm
>and increment this id for successive vm's?
No, it is not possible.
Hi Mark,
> - Is it possible to change the network that does the replication? (It would
> be good to use a direct connection with balance-rr for throughput.)
You can change the replication network via the 'migration' option in datacenter.cfg.
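A sketch of the relevant line in /etc/pve/datacenter.cfg (the network is a
placeholder):
migration: secure,network=10.10.10.0/24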
> - Is it possible to replicate between machines that are not in
> Mike O'Connor wrote on 13 February 2018 at 01:56:
>
>
> Hi All
>
> Where can I find the source packages that the Proxmox Ceph Luminous was
> built from?
>
>
> Mike
>
How does Proxmox 5 manage ZFS size? With quota?
pve-zsync does not sync the whole pool.
Do you mean subvolumes for LXC?
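A hypothetical pve-zsync job that syncs a single guest, not the whole pool:
pve-zsync sync --source 100 --dest 192.168.1.2:rpool/backup --verbose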
>
> Thanks
>
> Jérémy Carnus
>
> Wolfgang Link <w.l...@proxmox.com> wrote on 15 February 2018 at 14:49:
>
>
> > Yes, subvolumes like rpool/data/subvol-100-disk1
> We do not replicate the file system properties.
> Because you have to restore it manually anyway, and so you can set the r
Hello,
> Does anyone know a decent way of logging pve-zsync status? For failure or how
> long it took to run the sync?
All jobs are executed by cron. The default setting on a Proxmox VE host is:
if a cron job generates output, that output is sent to the root email address.
But you can configure
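One sketch is to redirect the job's output in /etc/cron.d/pve-zsync to a log
file (ids and paths are placeholders):
*/15 * * * * root pve-zsync sync --source 100 --dest 192.168.1.2:rpool/backup >> /var/log/pve-zsync-100.log 2>&1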
Regards,
Wolfgang Link
> So does this mean that all those processes are sitting in a "queue" waiting
> to execute? wouldn't it be more sensible for the script to terminate if a
> process is already running for the same job?
>
No, because as I wrote, 15 minutes is the default, but we have many users who
have longer intervals like
Hi Arjen,
thanks for this information.
We will update the wiki with your information.
Best Regards,
Wolfgang Link
> arjenvanweel...@gmail.com wrote on 8 March 2019 at 21:21:
>
>
> Apologies for replying to myself.
>
> On Fri, 2019-03-08 at 11:12 +0100, arjenvanweel.