---
src/PVE/INotify.pm | 64 +++-
test/etc_network_interfaces/t.create_network.pl | 66 +
2 files changed, 117 insertions(+), 13 deletions(-)
diff --git a/src/PVE/INotify.pm b/src/PVE/INotify.pm
index dbc9868..186df62
also check that the mtu value is lower than the parent interface's mtu
fixme: a vxlan interface's mtu should be 50 bytes lower than the outgoing interface's;
we need to find which interface is used (unicast/multicast/frr)
---
src/PVE/INotify.pm | 22 ++
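The mtu check described above could be sketched roughly as follows. This is an illustration only, not the actual INotify.pm patch: the layout of the `$ifaces` hash, the sub name and the 1500-byte default are assumptions.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of a child-vs-parent mtu sanity check (hypothetical helper,
# not the real INotify.pm code).
sub check_mtu {
    my ($ifaces, $parent_name, $child_name) = @_;

    my $parent = $ifaces->{$parent_name}
	or die "parent interface '$parent_name' does not exist\n";
    my $child = $ifaces->{$child_name};

    my $parent_mtu = $parent->{mtu} // 1500;  # assume default mtu if unset
    my $child_mtu  = $child->{mtu} // return; # nothing to check

    # vxlan encapsulation needs 50 bytes of headroom on the outgoing interface
    my $headroom = ($child->{type} // '') eq 'vxlan' ? 50 : 0;

    die "mtu $child_mtu of '$child_name' is too big for parent '$parent_name'\n"
	if $child_mtu > $parent_mtu - $headroom;
}
```

As the fixme notes, the hard part is not the arithmetic but determining which outgoing interface the vxlan tunnel actually uses (unicast, multicast or frr setups differ), which the sketch glosses over by taking the parent name as a parameter.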
Changelog v3:
- add check for bond interfaces + tests
- add check for vlan interfaces + tests
- add checks for mtu size child vs parent + tests
unchanged:
Inotify : add vxlan interface support
Inotify : add bridge ports options
new patches:
Inotify : add check_bond
Inotify : add check
---
src/PVE/INotify.pm | 47 ++-
test/etc_network_interfaces/t.create_network.pl | 51 +
2 files changed, 97 insertions(+), 1 deletion(-)
diff --git a/src/PVE/INotify.pm b/src/PVE/INotify.pm
index 0b9ea4a..dbc9868 100644
verify that the parent interface exists
verify that the parent interface type is eth, bond or bridge
verify that the parent bridge is vlan aware if the type is bridge
---
src/PVE/INotify.pm | 19
test/etc_network_interfaces/t.create_network.pl | 30 -
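The three vlan parent checks above could be sketched like this; the `$ifaces` hash layout, the sub name and the `bridge-vlan-aware` key are illustrative assumptions, not the actual patch code.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the vlan parent checks (hypothetical helper).
sub check_vlan_parent {
    my ($ifaces, $parent_name) = @_;

    # the parent interface must exist
    my $parent = $ifaces->{$parent_name}
	or die "vlan parent interface '$parent_name' does not exist\n";

    # only eth, bond and bridge are valid parent types
    my $type = $parent->{type} // '';
    die "vlan parent '$parent_name' has unsupported type '$type'\n"
	if $type !~ /^(?:eth|bond|bridge)$/;

    # a bridge parent must be vlan aware
    die "bridge '$parent_name' is not vlan aware\n"
	if $type eq 'bridge' && !$parent->{'bridge-vlan-aware'};
}
```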
verify that bond slaves exist && their type is eth
---
src/PVE/INotify.pm | 15 ++
test/etc_network_interfaces/t.create_network.pl | 38 +
2 files changed, 53 insertions(+)
diff --git a/src/PVE/INotify.pm b/src/PVE/INotify.pm
index
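The bond slave check is the simplest of the three; a sketch under the same assumed `$ifaces` layout (names are illustrative, not the actual INotify.pm code):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the bond slave check (hypothetical helper).
sub check_bond_slaves {
    my ($ifaces, @slaves) = @_;

    for my $slave (@slaves) {
	# every slave must be a defined interface ...
	my $iface = $ifaces->{$slave}
	    or die "bond slave '$slave' does not exist\n";
	# ... and must be a physical eth device
	die "bond slave '$slave' is not an eth device\n"
	    if ($iface->{type} // '') ne 'eth';
    }
}
```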
On 7/4/18 1:28 PM, Dietmar Maurer wrote:
> Signed-off-by: Dietmar Maurer
> ---
> src/PVE/CLIFormatter.pm | 10 ++
> 1 file changed, 10 insertions(+)
>
> diff --git a/src/PVE/CLIFormatter.pm b/src/PVE/CLIFormatter.pm
> index 18e408b..eff6da5 100644
> --- a/src/PVE/CLIFormatter.pm
> +++
On 7/4/18 3:24 PM, Dietmar Maurer wrote:
> Also pass $options to renderer functions.
>
> Signed-off-by: Dietmar Maurer
> ---
> src/PVE/CLIFormatter.pm | 10 +-
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/src/PVE/CLIFormatter.pm b/src/PVE/CLIFormatter.pm
> index
On 7/4/18 12:43 PM, Alwin Antreich wrote:
> This patch series is an update and adds CephFS to our list of storages.
> You can mount the storage through the kernel or fuse client. The plugin for
> now allows all content formats, but this needs further testing.
>
> Config and keyfile locations
Also pass $options to renderer functions.
Signed-off-by: Dietmar Maurer
---
src/PVE/CLIFormatter.pm | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/src/PVE/CLIFormatter.pm b/src/PVE/CLIFormatter.pm
index eff6da5..549ed31 100644
--- a/src/PVE/CLIFormatter.pm
+++
Signed-off-by: Dietmar Maurer
---
bin/pvesh | 606 +++---
1 file changed, 262 insertions(+), 344 deletions(-)
diff --git a/bin/pvesh b/bin/pvesh
index ff7b8482..d3ab9954 100755
--- a/bin/pvesh
+++ b/bin/pvesh
@@ -2,12 +2,10 @@
use
We have good command line completion and history with 'bash', so there is
no real need to duplicate this functionality.
Signed-off-by: Dietmar Maurer
---
bin/pvesh | 76 ++-
1 file changed, 2 insertions(+), 74 deletions(-)
diff --git
Signed-off-by: Alwin Antreich
---
PVE/VZDump.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index a0376ef9..7fc69f98 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -223,7 +223,7 @@ sub storage_info {
die "can't use storage type
This patch series is an update and adds CephFS to our list of storages.
You can mount the storage through the kernel or fuse client. The plugin for
now allows all content formats, but this needs further testing.
Config and keyfile locations are the same as in the RBD plugin.
Example entry:
- ability to mount through kernel and fuse client
- allow mount options
- get MONs from ceph config if not in storage.cfg
- allow the use of ceph config with fuse client
- Delete secret on cephfs storage creation
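The actual example entry is elided from this excerpt; a hypothetical storage.cfg entry following the conventions of the RBD plugin (all names, IPs and option values invented for illustration) might look like:

```
cephfs: cephfs-store
	monhost 10.0.0.1 10.0.0.2 10.0.0.3
	path /mnt/pve/cephfs-store
	content backup,iso,vztmpl
	username admin
	fuse 0
```

With `fuse 1` the storage would be mounted through the fuse client instead of the kernel client, per the feature list above.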
Signed-off-by: Alwin Antreich
---
PVE/API2/Storage/Config.pm | 8 +-
Signed-off-by: Alwin Antreich
---
PVE/Storage/CephTools.pm | 33 +
1 file changed, 33 insertions(+)
diff --git a/PVE/Storage/CephTools.pm b/PVE/Storage/CephTools.pm
index 7aa6069..3e2cede 100644
--- a/PVE/Storage/CephTools.pm
+++ b/PVE/Storage/CephTools.pm
@@
in the RBDPlugin, which is also shared by the CephFSPlugin
Signed-off-by: Alwin Antreich
---
PVE/Storage/RBDPlugin.pm | 22 ++
1 file changed, 2 insertions(+), 20 deletions(-)
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 59f7941..be88ad7 100644
---
Add internal and external storage wizard for cephfs
Signed-off-by: Alwin Antreich
---
www/manager6/Makefile | 1 +
www/manager6/Utils.js | 10 +++
www/manager6/storage/CephFSEdit.js | 59 ++
3 files changed, 70 insertions(+)
Some methods for connecting to a ceph cluster are the same for RBD and
CephFS; these are merged into the helper modules.
Signed-off-by: Alwin Antreich
---
PVE/Storage/CephTools.pm | 55
PVE/Storage/Makefile | 2 +-
PVE/Storage/RBDPlugin.pm |
pve-cluster is not a big project with too many dependencies, so
autotools was a bit of an overkill for it.
Omit it, plus the ./configure step in general, and just use a plain
Makefile - in combination with pkg-config - like we do in our other
projects.
Build time gets reduced quite a bit - albeit the
On 7/4/18 10:52 AM, Dietmar Maurer wrote:
> We use this with 'pvesh'.
>
> Signed-off-by: Dietmar Maurer
> ---
> src/PVE/CLIHandler.pm | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/src/PVE/CLIHandler.pm b/src/PVE/CLIHandler.pm
> index 0b93a4b..6eab3c6 100644
> ---
On Wed, Jul 04, 2018 at 09:51:50AM +0200, Dietmar Maurer wrote:
> > > Seems not really wrong, IMO, and we use this for pve ceph, where the
> > > same backing storage can get added twice, with and without KRBD to our
> > > storage.cfg by our pool creator UI - one for CT which need KRBD and one
> >
We use this with 'pvesh'.
Signed-off-by: Dietmar Maurer
---
src/PVE/CLIHandler.pm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/src/PVE/CLIHandler.pm b/src/PVE/CLIHandler.pm
index 0b93a4b..6eab3c6 100644
--- a/src/PVE/CLIHandler.pm
+++ b/src/PVE/CLIHandler.pm
@@ -616,7
> > Seems not really wrong, IMO, and we use this for pve ceph, where the
> > same backing storage can get added twice, with and without KRBD to our
> > storage.cfg by our pool creator UI - one for CT which need KRBD and one
> > for VMs which normally do not want it.
>
> IMHO this is totally wrong
> Seems not really wrong, IMO, and we use this for pve ceph, where the
> same backing storage can get added twice, with and without KRBD to our
> storage.cfg by our pool creator UI - one for CT which need KRBD and one
> for VMs which normally do not want it.
IMHO this is totally wrong and we
On 7/3/18 8:28 PM, Dietmar Maurer wrote:
> I always tell users that it is dangerous to add the same storage
> multiple times (using different names and options). This breaks very
> basic assumptions and locking will not work as expected.
>
Hmm, shouldn't it be totally valid to have two storage