Thanks, applied, with a cleanup follow-up.

On 04/05/2018 at 02:08 PM, Wolfgang Link wrote:
  pve-intro.adoc        |  3 +-
  pve-storage-cifs.adoc | 99 +++++++++++++++++++++++++++++++++++++++++++++++++++
  pvesm.adoc            |  5 +++
  qm.adoc               |  2 +-
  vzdump.adoc           |  2 +-
  5 files changed, 108 insertions(+), 3 deletions(-)
  create mode 100644 pve-storage-cifs.adoc

diff --git a/pve-intro.adoc b/pve-intro.adoc
index 1188e77..f0b0d1e 100644
--- a/pve-intro.adoc
+++ b/pve-intro.adoc
@@ -106,6 +106,7 @@ We currently support the following Network storage types:
  * LVM Group (network backing with iSCSI targets)
  * iSCSI target
  * NFS Share
+* CIFS Share
  * Ceph RBD
  * Directly use iSCSI LUNs
  * GlusterFS
@@ -125,7 +126,7 @@ running Containers and KVM guests. It basically creates an archive of
  the VM or CT data which includes the VM/CT configuration files.
KVM live backup works for all storage types including VM images on
-NFS, iSCSI LUN, Ceph RBD or Sheepdog. The new backup format is
+NFS, CIFS, iSCSI LUN, Ceph RBD or Sheepdog. The new backup format is
  optimized for storing VM backups fast and effective (sparse files, out
  of order data, minimized I/O).
diff --git a/pve-storage-cifs.adoc b/pve-storage-cifs.adoc
new file mode 100644
index 0000000..696d809
--- /dev/null
+++ b/pve-storage-cifs.adoc
@@ -0,0 +1,99 @@
+CIFS Backend
+------------
+:title: Storage: CIFS
+
+Storage pool type: `cifs`
+
+The CIFS backend is based on the directory backend, so it shares most
+properties. The directory layout and the file naming conventions are
+the same. The main advantage is that you can directly configure the
+CIFS server, so the backend can mount the share automatically in
+the whole cluster. There is no need to modify `/etc/fstab`. The backend
+can also test if the server is online, and provides a method to query
+the server for exported shares.
+The backend supports all common storage properties, except the shared
+flag, which is always set. Additionally, the following properties are
+used to configure the CIFS server:
+
+server::
+
+Server IP or DNS name. To avoid DNS lookup delays, it is usually
+preferable to use an IP address instead of a DNS name - unless you
+have a very reliable DNS server, or list the server in the local
+`/etc/hosts` file.
+
+share::
+
+CIFS share (as listed by `pvesm cifsscan`).
+
+Optional properties:
+
+username::
+
+If not present, "guest" is used.
+
+password::
+
+The user password. It will be saved in a private directory
+(`/etc/pve/priv/<STORAGE_ID>.cred`).
+
+domain::
+
+Sets the domain (workgroup) of the user.
+
+smbversion::
+
+SMB protocol version (default is `3`). SMB1 is not supported due to
+security issues.
+
+path::
+
+The local mount point (defaults to `/mnt/pve/<STORAGE_ID>/`).
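[Editor's aside, not part of the patch: the `password` property above says the secret lands in a per-storage credential file. A minimal sketch of that scheme follows; the temporary directory stands in for `/etc/pve/priv`, and the exact key layout inside the `.cred` file is an assumption, not pvesm's verified implementation.]

```shell
#!/bin/sh
# Sketch: keep a CIFS password in a per-storage credential file,
# mimicking the /etc/pve/priv/<STORAGE_ID>.cred scheme described above.
# PRIV_DIR is a temporary stand-in; pvesm manages the real file itself.
STORAGE_ID=backup
PRIV_DIR=$(mktemp -d)

cred="$PRIV_DIR/$STORAGE_ID.cred"
printf 'password=%s\n' 'secret' > "$cred"   # key layout is an assumption
chmod 600 "$cred"                           # only root may read the secret
```

Keeping the file outside `/etc/pve/storage.cfg` means the cluster-wide config stays free of plaintext secrets.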
+.Configuration Example (`/etc/pve/storage.cfg`)
+----
+cifs: backup
+       path /mnt/pve/backup
+       server
+       share VMData
+       content backup
+       username anna
+       smbversion 3
+----
+Storage Features
+~~~~~~~~~~~~~~~~
+
+CIFS does not support snapshots, but the backend uses `qcow2` features
+to implement snapshots and cloning.
+.Storage features for backend `cifs`
+[options="header"]
+|=====
+|Content types                     |Image formats         |Shared |Snapshots
+|images rootdir vztmpl iso backup  |raw qcow2 vmdk subvol |yes    |qcow2
+|=====
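[Editor's aside, not part of the patch: since snapshot support here comes from the `qcow2` format rather than from CIFS itself, it can be exercised directly with `qemu-img`. A hedged sketch; the `run` guard skips the commands on machines without `qemu-img`, and the image path and snapshot name are made up.]

```shell
#!/bin/sh
# Skip gracefully when a tool is not installed.
run() { command -v "$1" >/dev/null 2>&1 && "$@" || echo "skipped: $*"; }

img="$(mktemp -u).qcow2"
run qemu-img create -f qcow2 "$img" 1G          # sparse image, as on a CIFS share
run qemu-img snapshot -c before-upgrade "$img"  # create an internal qcow2 snapshot
run qemu-img snapshot -l "$img"                 # list snapshots
```

This is the same mechanism the table advertises for the other file-level backends (directory, NFS, GlusterFS).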
+You can get a list of exported CIFS shares with:
+ # pvesm cifsscan <server> [--username <username>] [--password]
+See Also
+~~~~~~~~
+
+* link:/wiki/Storage[Storage]
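[Editor's aside, not part of the patch: for completeness, a sketch of the CLI round trip the new backend enables - scan the server, then create the storage. The server address and names are placeholders, and the `run` guard simply skips the commands on machines without `pvesm`, so treat this as an illustration rather than a tested session.]

```shell
#!/bin/sh
# Skip gracefully on machines without the Proxmox VE tools.
run() { command -v "$1" >/dev/null 2>&1 && "$@" || echo "skipped: $*"; }

# 1. List the shares exported by the server (placeholder address).
run pvesm cifsscan 192.0.2.10 --username anna

# 2. Add the share as a storage named "backup", restricted to backups.
run pvesm add cifs backup --server 192.0.2.10 --share VMData \
    --username anna --content backup
```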
diff --git a/pvesm.adoc b/pvesm.adoc
index 62d190e..1d55d59 100644
--- a/pvesm.adoc
+++ b/pvesm.adoc
@@ -71,6 +71,7 @@ snapshots and clones.
  |ZFS (local)    |zfspool     |file  |no    |yes      |yes
  |Directory      |dir         |file  |no    |no^1^    |yes
  |NFS            |nfs         |file  |yes   |no^1^    |yes
+|CIFS           |cifs        |file  |yes   |no^1^    |yes
  |GlusterFS      |glusterfs   |file  |yes   |no^1^    |yes
  |LVM            |lvm         |block |no^2^ |no       |yes
  |LVM-thin       |lvmthin     |block |no    |yes      |yes
@@ -370,6 +371,8 @@ See Also
  * link:/wiki/Storage:_NFS[Storage: NFS]
+* link:/wiki/Storage:_CIFS[Storage: CIFS]
  * link:/wiki/Storage:_RBD[Storage: RBD]
  * link:/wiki/Storage:_ZFS[Storage: ZFS]
@@ -386,6 +389,8 @@ include::pve-storage-dir.adoc[]
  include::pve-storage-nfs.adoc[]
+include::pve-storage-cifs.adoc[]
diff --git a/qm.adoc b/qm.adoc
index 154c5c1..5fba463 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -163,7 +163,7 @@ On each controller you attach a number of emulated hard disks, which are backed
  by a file or a block device residing in the configured storage. The choice of
  a storage type will determine the format of the hard disk image. Storages
  presenting block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
-whereas files based storages (Ext4, NFS, GlusterFS) will let you to choose
+whereas files based storages (Ext4, NFS, CIFS, GlusterFS) will let you to choose
  either the *raw disk image format* or the *QEMU image format*.
* the *QEMU image format* is a copy on write format which allows snapshots, and
diff --git a/vzdump.adoc b/vzdump.adoc
index 0461140..193e1cf 100644
--- a/vzdump.adoc
+++ b/vzdump.adoc
@@ -111,7 +111,7 @@ started (resumed) again. This results in minimal downtime, but needs
  additional space to hold the container copy.
  When the container is on a local file system and the target storage of
-the backup is an NFS server, you should set `--tmpdir` to reside on a
+the backup is an NFS/CIFS server, you should set `--tmpdir` to reside on a
  local file system too, as this will result in a many fold performance
  improvement.  Use of a local `tmpdir` is also required if you want to
  backup a local container using ACLs in suspend mode if the backup

pve-devel mailing list
