I need to share the space of a new SAN device with several Proxmox nodes in a
cluster. The SAN only supports iSCSI.
What is the best way?
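A common approach is to attach the iSCSI target on every node and put a shared
LVM volume group on top of the LUN, since plain LVM can be shared safely by the
cluster. A minimal sketch of /etc/pve/storage.cfg, assuming a hypothetical
portal address, target IQN and a volume group vg_san1 created once with
pvcreate/vgcreate on the iSCSI disk:

    iscsi: san1
        portal 192.168.0.100
        target iqn.2016-09.com.example:storage.san1
        content none

    lvm: san1-lvm
        vgname vg_san1
        shared 1
        content images

The shared 1 flag tells Proxmox the volume group is reachable from all nodes;
note that LVM on iSCSI gives you shared storage but no snapshot support.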
Hi
I live in Cuba. I have no way to buy a book online. I would appreciate it
if someone could provide the Mastering Proxmox, Second Edition (Proxmox 4.x)
book for me.
Thanks in advance!
On 09/20/2016 02:23 AM, Emmanuel Kasper wrote:
Hi Eneko
Just noticed the repository info wiki doesn't give informat
On 09/20/2016 02:23 AM, Emmanuel Kasper wrote:
Hi
Is there any portable Proxmox wiki, or a way to download it?
I tried to download it using wget, but I couldn't. I need it offline.
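For offline reading, one option (a sketch, not guaranteed to work against the
wiki's crawler settings) is to mirror it with wget's recursive mode:

    # Mirror the Proxmox wiki for offline browsing
    wget --mirror --convert-links --adjust-extension \
         --page-requisites --no-parent --wait=1 \
         https://pve.proxmox.com/wiki/Main_Page

--convert-links rewrites the links so the local pages reference each other
instead of the live site.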
2016-09-21 15:10 GMT-03:00 Denis Morejon:
Heh, heh... No problem!
On 09/21/2016 07:31 AM, Robert Fantini wrote:
I'll trade one for some cigars. We purchased a few copies last week.
You can owe me the cigars.
On my next visit to the Dominican Republic I take a s
CTs are one of the best features of Proxmox for me. With such technology
it becomes a powerful contender.
I now use Proxmox 4.2.
First question:
I can't find an ideal storage for containers: a shared storage that
supports snapshots too (to migrate and to do live backups).
Second one:
On the
Hi:
I installed a Proxmox 5.1 server with 4 SATA HDDs. I built a RAIDZ1
(RAID 5 equivalent) to introduce storage redundancy. But no LVM is
present. I want to use LVM storage on top of the zpool. What should I do?
On 07/08/18 at 08:19, Mark Schouten wrote:
On Tue, 2018-08-07 at 08:13 -0400, Denis Morejon wrote:
I installed a Proxmox 5.1 server with 4 SATA HDDs. I built a RAIDZ1
(RAID 5 equivalent) to introduce storage redundancy. But no LVM is
present. I want to use LVM storage on top of the zpool. What should I do?
I'm curious about
On 07/08/18 at 08:49, Mark Schouten wrote:
On Tue, 2018-08-07 at 09:42 -0400, Denis Morejon wrote:
So local-lvm is not active by default. Then, when you add this node to
others with local-lvm storage active, and you try to migrate VMs between
them, there are problems...
I don't
On 07/08/18 at 09:57, Mark Schouten wrote:
On Tue, 2018-08-07 at 10:49 -0400, Denis Morejon wrote:
So, on 4 nodes of the cluster I am able to use local-lvm to put CTs and
VMs over there, but I am not able to put VMs and CTs on local-lvm on the
others.
That's why I want to create pv
On 07/08/18 at 10:45, Mark Schouten wrote:
On Tue, 2018-08-07 at 11:30 -0400, Denis Morejon wrote:
This is not possible because I don't use ZFS on all eight nodes? Just
on 4 of them, and modifying /etc/pve/storage.cfg is a cluster-wide
operation!
Ah yes, crap. That's right..
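One detail that may help here: a storage definition in /etc/pve/storage.cfg
can carry a nodes list, so a cluster-wide entry is only activated on the
nodes that actually have the backing store. A minimal sketch, assuming a
hypothetical storage ID zfs-lvm and node names proxmox1 through proxmox4
for the four ZFS machines:

    lvm: zfs-lvm
        vgname pve
        nodes proxmox1,proxmox2,proxmox3,proxmox4
        content images,rootdir

The other four nodes simply will not offer that storage, so the single
shared storage.cfg stops being a problem.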
On 07/08/18 at 17:51, Yannis Milios wrote:
(zfs create -V 100G rpool/lvm) and make that a PV (pvcreate
/dev/zvol/rpool/lvm) and make a VG (vgcreate pve /dev/zvol/rpool/lvm)
and then an LV (lvcreate -l 100%FREE -n data pve)
Try the above as it was suggested to you ...
But I suspect I have
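Once the volume group exists, it still has to be registered as a storage. If
Proxmox should allocate the logical volumes itself, one can skip the final
lvcreate and register the VG directly; a minimal sketch with pvesm, assuming
a hypothetical storage ID zfs-lvm:

    # Register the new volume group as an LVM storage in Proxmox
    pvesm add lvm zfs-lvm --vgname pve --content images,rootdir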
Why hasn't the Proxmox team incorporated software RAID into the install
process? Then we could get both redundancy and the advantages of LVM
when using local disks.
The 10 nodes lost communication with each other, and they were working
fine for a month. They all have version 5.1.
All nodes have the same date/time and show a status like this:
root@proxmox11:~# pvecm status
Quorum information
------------------
Date: Fri Oct 12 11:55:59 201
I upgraded all the Proxmox nodes (with the Debian repo and with the
Proxmox repo) and all is fine again!
Thank you.
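For reference, the upgrade on each node is the standard apt procedure
(assuming the Proxmox repository, enterprise or no-subscription, is already
configured in the sources):

    # Refresh package lists and pull in all pending Proxmox and Debian updates
    apt update
    apt dist-upgrade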
On 15/10/18 at 03:57, Thomas Lamprecht wrote:
On 10/12/18 6:57 PM, Denis Morejon wrote:
The 10 nodes lost communication with each other, and they were working fine
for a month. They
quorum.
Are there any tips (or steps) to fix it or to avoid it?
something related to this Proxmox version. What to do?
On 15/10/18 at 12:46, Denis Morejon wrote:
Is multicast communication the main cause of Proxmox cluster file
system problems?
Why do date and time sometimes have to do with cluster errors?
From my point of view, cluster communication
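Multicast is what corosync used by default in this version, so it is the
first thing to test when the cluster loses quorum. A minimal check with
omping, run at the same time on every node (hypothetical node names):

    # Multicast sanity check between cluster nodes; run simultaneously on each of them
    omping -c 10000 -i 0.001 -F -q proxmox11 proxmox12 proxmox13

High multicast loss while unicast works usually points at the switch,
typically IGMP snooping enabled without an IGMP querier.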
Hi:
(1)
I don't understand the idea behind keeping a VM from starting up when
there is no quorum. From my point of view, it has been maybe the worst part
of managing a Proxmox cluster, because the stability of services (VMs up
and running) had to come first (before the sync of information, for instance).
Is there
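The quorum rule exists to prevent split-brain: without it, two isolated
halves of a cluster could each start the same VM and corrupt its disk. If a
majority of nodes is genuinely gone and the risk is understood, quorum can
be restored by hand on a surviving node; a minimal sketch (the vote count
depends on the situation):

    # Tell corosync that 1 vote is enough for quorum (use with care)
    pvecm expected 1

After that, VMs on that node can be started again.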
Could you give me an example, please?
In practice, I know a lot of people who are afraid of building a
cluster because of the loss of quorum, and have a plain HTML web page
with the URL of each node instead. And this is sad. This is like
assuming that the most important thing is to have the VMs
I have a 12-member cluster. I had problems with two nodes, and it was
necessary to replace the memory in them. But when I shut those servers
down, I lost the cluster in the web interface. However, when I type "pvecm
status" on one of the working nodes, it appears to be OK (see below: all
nodes vote).
I note that sharing the DB file, even using the multicast protocol, could
put a limit on the maximum number of members. Any thoughts about a
centralized DB paradigm? How many members have you put together?
Thank you Ian!
I had some nodes referenced by IP address and others by hostname in
corosync.conf, because some were added to the cluster by IP and others by
hostname. I edited corosync.conf and put only IP addresses. For example:
node {
    name: proxmox5
    nodeid: 12
    quorum_votes: 1
    ring0_addr: ...
}
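One caveat when editing corosync.conf by hand: the config_version field in
the totem section must be incremented, otherwise the nodes will ignore the
new file. A sketch of the relevant fragment (the number itself is just an
example):

    totem {
        version: 2
        # bump config_version by one on every manual edit
        config_version: 13
        ...
    }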
Hello:
I wanted to know some tips about why the Proxmox team decided to migrate to
LXC, or why you think the OpenVZ team didn't find support for newer Linux
kernels. After all, I felt happy with OpenVZ containers.
Thanks