http://git-wip-us.apache.org/repos/asf/cloudstack-docs-admin/blob/fff40fc1/source/locale/zh_CN/LC_MESSAGES/storage.po
----------------------------------------------------------------------
diff --git a/source/locale/zh_CN/LC_MESSAGES/storage.po
b/source/locale/zh_CN/LC_MESSAGES/storage.po
new file mode 100644
index 0000000..dfc19da
--- /dev/null
+++ b/source/locale/zh_CN/LC_MESSAGES/storage.po
@@ -0,0 +1,1461 @@
+# SOME DESCRIPTIVE TITLE.
+# Copyright (C)
+# This file is distributed under the same license as the Apache CloudStack Administration Documentation package.
+#
+# Translators:
+# renoshen <shenkuan-...@sinosig.com>, 2014
+msgid ""
+msgstr ""
+"Project-Id-Version: Apache CloudStack Administration RTD\n"
+"Report-Msgid-Bugs-To: \n"
+"POT-Creation-Date: 2014-03-31 14:08-0400\n"
+"PO-Revision-Date: 2014-05-15 02:30+0000\n"
+"Last-Translator: renoshen <shenkuan-...@sinosig.com>\n"
+"Language-Team: Chinese (China) (http://www.transifex.com/projects/p/apache-cloudstack-administration-rtd/language/zh_CN/)\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Language: zh_CN\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+# 6a202e4741994470b627a504dfaa0ec4
+#: ../../storage.rst:18
+msgid "Working with Storage"
+msgstr "使用存储"
+
+# 162dc01704434bf39297420664b52155
+#: ../../storage.rst:21
+msgid "Storage Overview"
+msgstr "存储概述"
+
+# fb6cf279ed414191933888a05da50e30
+#: ../../storage.rst:23
+msgid ""
+"CloudStack defines two types of storage: primary and secondary. Primary "
+"storage can be accessed by either iSCSI or NFS. Additionally, direct "
+"attached storage may be used for primary storage. Secondary storage is "
+"always accessed using NFS."
+msgstr "CloudStack定义了两种存储：主存储和辅助存储。主存储可以使用iSCSI或NFS协议访问。另外，直接附加存储也可以用作主存储。辅助存储总是使用NFS协议访问。"
+
+# 3e1682b1378e44d1b6277de404cef6c9
+#: ../../storage.rst:28
+msgid ""
+"There is no ephemeral storage in CloudStack. All volumes on all nodes are "
+"persistent."
+msgstr "CloudStack中没有临时存储。所有节点上的所有卷都是持久存储。"
+
+# 3b158cf365224a128dce4c9bf414bc05
+#: ../../storage.rst:32
+msgid "Primary Storage"
+msgstr "主存储"
+
+# 0182a2b8079342b0b3ceb0b29b179646
+#: ../../storage.rst:34
+msgid ""
+"This section gives concepts and technical details about CloudStack primary "
+"storage. For information about how to install and configure primary storage "
+"through the CloudStack UI, see the Installation Guide."
+msgstr "本章节讲述的是关于CloudStack主存储的概念和技术细节。更多关于如何通过CloudStack UI安装和配置主存储的信息，请参阅安装指南。"
+
+# 6050cb5647774b9191d87c9ac4c7db00
+#: ../../storage.rst:38
+msgid ""
+"`“About Primary Storage” "
+"<http://docs.cloudstack.apache.org/en/latest/concepts.html#about-primary-"
+"storage>`_"
+msgstr "`“关于主存储” <http://docs.cloudstack.apache.org/en/latest/concepts.html#about-primary-storage>`_"
+
+# 2d5f67decc454f279ad544d95f3a200c
+#: ../../storage.rst:41
+msgid "Best Practices for Primary Storage"
+msgstr "主存储的最佳实践"
+
+# a8e6b6ee46a0407d89434124df4e5829
+#: ../../storage.rst:45
+msgid ""
+"The speed of primary storage will impact guest performance. If possible, "
+"choose smaller, higher RPM drives or SSDs for primary storage."
+msgstr "主存储的速度会直接影响来宾虚拟机的性能。如果可能，为主存储选择容量小、转速高的硬盘或SSD。"
+
+# 25a499cd8ec24c99bfc6f0391d203d7b
+#: ../../storage.rst:51
+msgid "There are two ways CloudStack can leverage primary storage:"
+msgstr "CloudStack用两种方式使用主存储："
+
+# 8c344e1829c64653b8c2a3f7fd4340c6
+#: ../../storage.rst:53
+msgid ""
+"Static: This is CloudStack's traditional way of handling storage. In this "
+"model, a preallocated amount of storage (ex. a volume from a SAN) is given "
+"to CloudStack. CloudStack then permits many of its volumes to be created on "
+"this storage (can be root and/or data disks). If using this technique, "
+"ensure that nothing is stored on the storage. Adding the storage to "
+"CloudStack will destroy any existing data."
+msgstr "静态：这是CloudStack管理存储的传统方式。在这个模式下，要给CloudStack预先分配一定量的存储(比如一个SAN中的卷)。然后CloudStack允许在其上创建若干个卷(可以是root和/或数据盘)。如果使用这种技术，请确保该存储上没有存放任何数据。给CloudStack添加该存储会销毁已存在的所有数据。"
+
+# 72b1dbe1653c4bfbb6de406140cd47d5
+#: ../../storage.rst:61
+msgid ""
+"Dynamic: This is a newer way for CloudStack to manage storage. In this "
+"model, a storage system (rather than a preallocated amount of storage) is "
+"given to CloudStack. CloudStack, working in concert with a storage plug-in, "
+"dynamically creates volumes on the storage system and each volume on the "
+"storage system maps to a single CloudStack volume. This is highly useful for"
+" features such as storage Quality of Service. Currently this feature is "
+"supported for data disks (Disk Offerings)."
+msgstr "动态：这是一种较新的CloudStack管理存储的方式。在这个模式下，给CloudStack使用的是一个存储系统(而不是预先分配好的存储)。CloudStack配合存储插件一起工作，动态地在存储系统上创建卷，并且存储系统上的每个卷都映射到一个CloudStack卷。这非常有利于实现存储QoS之类的特性。目前数据磁盘(磁盘方案)支持这个特性。"
+
+# 0a9a0545d65d499e9e87855b3f653046
+#: ../../storage.rst:71
+msgid "Runtime Behavior of Primary Storage"
+msgstr "主存储的运行时行为"
+
+# c1b7c1df534f487d9b282f9cba881b25
+#: ../../storage.rst:73
+msgid ""
+"Root volumes are created automatically when a virtual machine is created. "
+"Root volumes are deleted when the VM is destroyed. Data volumes can be "
+"created and dynamically attached to VMs. Data volumes are not deleted when "
+"VMs are destroyed."
+msgstr "当创建虚拟机的时候，root卷会自动地创建。在VM被销毁的时候root卷也会被删除。数据卷可以被创建并动态地附加到VM上。VM销毁时并不会删除数据卷。"
+
+# b55c6963f6704ca899a39303709603e6
+#: ../../storage.rst:78
+msgid ""
+"Administrators should monitor the capacity of primary storage devices and "
+"add additional primary storage as needed. See the Advanced Installation "
+"Guide."
+msgstr "管理员应该监控主存储设备的容量，并在需要时添加额外的主存储。请参阅高级安装指南。"
+
+# 0e941196770047709eb7afb942ef3189
+#: ../../storage.rst:82
+msgid ""
+"Administrators add primary storage to the system by creating a CloudStack "
+"storage pool. Each storage pool is associated with a cluster or a zone."
+msgstr "管理员通过创建CloudStack存储池来给系统添加主存储。每个存储池对应一个群集或者一个区域。"
+
+# 0be8863a4d5c46b1bfa11b79c1a1b407
+#: ../../storage.rst:86
+msgid ""
+"With regards to data disks, when a user executes a Disk Offering to create a"
+" data disk, the information is initially written to the CloudStack database "
+"only. Upon the first request that the data disk be attached to a VM, "
+"CloudStack determines what storage to place the volume on and space is taken"
+" from that storage (either from preallocated storage or from a storage "
+"system (ex. a SAN), depending on how the primary storage was added to "
+"CloudStack)."
+msgstr "对于数据磁盘，当用户执行一个磁盘方案来创建数据磁盘的时候，相关信息起初只被写到CloudStack的数据库中。在第一次请求将数据磁盘附加到VM时，CloudStack决定把这个卷放置在哪个存储上，并从该存储中占用空间(是预分配的存储还是存储系统(比如SAN)，取决于主存储以何种方式添加到CloudStack)。"
+
+# 35ce9d0c83744d748be1f1449258f36c
+#: ../../storage.rst:95
+msgid "Hypervisor Support for Primary Storage"
+msgstr "Hypervisor对主存储的支持"
+
+# 31019aa248e34fc6b8039761b2720e7f
+#: ../../storage.rst:97
+msgid ""
+"The following table shows storage options and parameters for different "
+"hypervisors."
+msgstr "下面的表格展示了不同hypervisor所支持的存储选项和参数。"
+
+# 9b771149e496473c92a454c33198c403
+#: ../../storage.rst:101
+msgid "Storage media \\\\ hypervisor"
+msgstr "存储介质 \\\\ hypervisor"
+
+# e97686c28c974dbf92162b6b0fa942b8
+# bfae9331e42b4e998bdae8c62e3765eb
+#: ../../storage.rst:101 ../../storage.rst:805
+msgid "VMware vSphere"
+msgstr "VMware vSphere"
+
+# 0058e135219d4613a2d541b1870917de
+# 3b343f7b559749a29a430c1a553b3526
+#: ../../storage.rst:101 ../../storage.rst:805
+msgid "Citrix XenServer"
+msgstr "Citrix XenServer"
+
+# 367bf4f20f1c4984924962ff00d6df89
+# 52a52afa13aa4117b06c90af23419b9d
+# edc2d21234584f6a99ca592df4540bbe
+#: ../../storage.rst:101 ../../storage.rst:354 ../../storage.rst:805
+msgid "KVM"
+msgstr "KVM"
+
+# b5679e3e7ee44ad081663d53a59df3e5
+#: ../../storage.rst:101
+msgid "Hyper-V"
+msgstr "Hyper-V"
+
+# 8c6b23cd30df4e11a24b05d287672f22
+#: ../../storage.rst:103
+msgid "**Format for Disks, Templates, and Snapshots**"
+msgstr "**磁盘、模板和快照的格式**"
+
+# 1533db8909654e9dbdee1bfb501c22da
+#: ../../storage.rst:103
+msgid "VMDK"
+msgstr "VMDK"
+
+# aa7a7f57290745ebb004f74e8d93a94c
+# 7d5a6b20bd9c40dd8e2bf7f89defbb4f
+#: ../../storage.rst:103 ../../storage.rst:352
+msgid "VHD"
+msgstr "VHD"
+
+# 857238d5d0894c4195147b0552ef94cc
+# 85858348297d4cb18e06b4f68df7bb9b
+#: ../../storage.rst:103 ../../storage.rst:354
+msgid "QCOW2"
+msgstr "QCOW2"
+
+# 45f3ddfdaf174b0e844873ef2d5609aa
+#: ../../storage.rst:103
+msgid "VHD Snapshots are not supported."
+msgstr "不支持VHD快照。"
+
+# 629dd8794f35435ba0c0de72442d0698
+#: ../../storage.rst:105
+msgid "**iSCSI support**"
+msgstr "**支持iSCSI**"
+
+# df0857bbc10d454694cb27c6368cc385
+# 6cbf7a9a5ccf4c9b8d9cfea376fa936c
+#: ../../storage.rst:105 ../../storage.rst:106
+msgid "VMFS"
+msgstr "VMFS"
+
+# 6356c605eb10407597ce69ae946eded4
+#: ../../storage.rst:105
+msgid "Clustered LVM"
+msgstr "集群化的LVM"
+
+# 54f6f377a0e9433ba6646f2d344089a1
+# e8acf322132f4c1d92c944041a19c61f
+#: ../../storage.rst:105 ../../storage.rst:106
+msgid "Yes, via Shared Mountpoint"
+msgstr "是的，通过Shared Mountpoint"
+
+# d6cf7d4bef014613aaf695ffffe4a235
+# c823d9f7038149b690668fa00379827b
+# 0b59cd2d03ac46529c20eaaab120981c
+# 846a3012c0ae422495176ac15ab15e55
+# d14fff6660d64688afc8ccbcc70cddfd
+# caa33ef9d4a04315a001850c938c31b8
+# 86461019f42a406e8b39a826bc193ed7
+# 5bb1f7b752e345d2b87e7e358be4b8a8
+# e68af722a2d2497aa6f8a8fca2cbdde6
+#: ../../storage.rst:105 ../../storage.rst:106 ../../storage.rst:107
+#: ../../storage.rst:109 ../../storage.rst:110 ../../storage.rst:110
+#: ../../storage.rst:110 ../../storage.rst:807 ../../storage.rst:807
+msgid "No"
+msgstr "不支持"
+
+# 4a312a6b96eb419e9e8f0340e73043c7
+#: ../../storage.rst:106
+msgid "**Fiber Channel support**"
+msgstr "**FC支持**"
+
+# ca35f289a9684c7dbf0aa1d2eec7ea07
+#: ../../storage.rst:106
+msgid "Yes, via Existing SR"
+msgstr "是的，通过已有的SR"
+
+# 50c9535f99f94ba991e8f54be3a35868
+#: ../../storage.rst:107
+msgid "**NFS support**"
+msgstr "**支持NFS**"
+
+# e2c8e01326de420a93da3da1cccc6e04
+# 03b10788ed144cee89bfeb280a91507b
+# 36ec13d7d1104063898e7ee586dc8e73
+# 7b8ce691a5754a2bbe7b77ba4a5beb29
+# 73f105a11ea244348d276ceda77ffa48
+# 8e54d7de196946248fa2f7c5e5e89ed6
+# f03361eae16c40518b6e18d98ee28635
+# 34559af0e0774036a131a78b6b54e924
+# 89b6eb69d35a47eabb875bdf4405e421
+#: ../../storage.rst:107 ../../storage.rst:107 ../../storage.rst:107
+#: ../../storage.rst:108 ../../storage.rst:108 ../../storage.rst:108
+#: ../../storage.rst:108 ../../storage.rst:110 ../../storage.rst:807
+msgid "Yes"
+msgstr "支持"
+
+# 6d3ad498707b47e183038ce5ebb1447d
+#: ../../storage.rst:108
+msgid "**Local storage support**"
+msgstr "**支持本地存储**"
+
+# 858891283fa8498382d9942dc59e2ae6
+#: ../../storage.rst:109
+msgid "**Storage over-provisioning**"
+msgstr "**存储超配**"
+
+# 876ffced6c3f4dd1825a99b39c624759
+#: ../../storage.rst:109
+msgid "NFS and iSCSI"
+msgstr "NFS和iSCSI"
+
+# 34edc60597864cb88c0d3a68161bd014
+# afaccbd5ae404bbf9a2b322b957fc27f
+#: ../../storage.rst:109 ../../storage.rst:109
+msgid "NFS"
+msgstr "NFS"
+
+# 5cdd4df99d8146b99052b26946cfdebc
+#: ../../storage.rst:110
+msgid "**SMB/CIFS**"
+msgstr "**SMB/CIFS**"
+
+# 23cfe36aac394c0183ffe7482f8dcded
+#: ../../storage.rst:113
+msgid ""
+"XenServer uses a clustered LVM system to store VM images on iSCSI and Fiber "
+"Channel volumes and does not support over-provisioning in the hypervisor. "
+"The storage server itself, however, can support thin-provisioning. As a "
+"result the CloudStack can still support storage over-provisioning by running"
+" on thin-provisioned storage volumes."
+msgstr "XenServer在iSCSI和FC卷上使用集群化的LVM系统来存储VM镜像，并且在hypervisor层面不支持超配。不过，存储服务器本身可以支持精简配置。因此CloudStack仍然可以通过运行在精简配置的存储卷上来支持存储超配。"
+
+# bd46a0f32fc0459fadf89598bd0b9f7b
+#: ../../storage.rst:119
+msgid ""
+"KVM supports \"Shared Mountpoint\" storage. A shared mountpoint is a file "
+"system path local to each server in a given cluster. The path must be the "
+"same across all Hosts in the cluster, for example /mnt/primary1. This shared"
+" mountpoint is assumed to be a clustered filesystem such as OCFS2. In this "
+"case the CloudStack does not attempt to mount or unmount the storage as is "
+"done with NFS. The CloudStack requires that the administrator insure that "
+"the storage is available"
+msgstr "KVM支持 \"Shared Mountpoint\" 存储。Shared Mountpoint是群集中每个服务器本地文件系统中的一个路径。群集中所有主机上的这个路径必须一致，比如/mnt/primary1。并假设Shared Mountpoint是一个集群文件系统如OCFS2。在这种情况下，CloudStack不会像对NFS那样尝试挂载或卸载该存储。CloudStack需要管理员确保该存储是可用的。"
+
+# 5ab2d028f0aa4897b165fb8e42dbb0e4
+#: ../../storage.rst:127
+msgid ""
+"With NFS storage, CloudStack manages the overprovisioning. In this case the "
+"global configuration parameter storage.overprovisioning.factor controls the "
+"degree of overprovisioning. This is independent of hypervisor type."
+msgstr "在NFS存储中，由CloudStack管理超配。这种情况下，全局配置参数storage.overprovisioning.factor控制超配的程度。这与hypervisor类型无关。"
+
+# 585fda9b393848eaa23c3131967805fe
+#: ../../storage.rst:132
+msgid ""
+"Local storage is an option for primary storage for vSphere, XenServer, and "
+"KVM. When the local disk option is enabled, a local disk storage pool is "
+"automatically created on each host. To use local storage for the System "
+"Virtual Machines (such as the Virtual Router), set "
+"system.vm.use.local.storage to true in global configuration."
+msgstr "在vSphere、XenServer和KVM中，本地存储是主存储的一个可选项。当启用了本地磁盘选项时，会在每台主机上自动创建本地磁盘存储池。要让系统虚拟机(如虚拟路由器)使用本地存储，请将全局配置中的system.vm.use.local.storage设置为true。"
+
+# c4c43cee4d724ef7aa505614fe3cedba
+#: ../../storage.rst:138
+msgid ""
+"CloudStack supports multiple primary storage pools in a Cluster. For "
+"example, you could provision 2 NFS servers in primary storage. Or you could "
+"provision 1 iSCSI LUN initially and then add a second iSCSI LUN when the "
+"first approaches capacity."
+msgstr "CloudStack支持在一个群集内有多个主存储池。比如，可以用2个NFS服务器提供主存储。也可以先配置1个iSCSI LUN，当第一个容量快用完时再添加第二个iSCSI LUN。"
+
+# 235339683cb34f9984c233974b8da43d
+#: ../../storage.rst:144
+msgid "Storage Tags"
+msgstr "存储标签"
+
+# c9a9cdbd754642189859f3f96cfebe95
+#: ../../storage.rst:146
+msgid ""
+"Storage may be \"tagged\". A tag is a text string attribute associated with "
+"primary storage, a Disk Offering, or a Service Offering. Tags allow "
+"administrators to provide additional information about the storage. For "
+"example, that is a \"SSD\" or it is \"slow\". Tags are not interpreted by "
+"CloudStack. They are matched against tags placed on service and disk "
+"offerings. CloudStack requires all tags on service and disk offerings to "
+"exist on the primary storage before it allocates root or data disks on the "
+"primary storage. Service and disk offering tags are used to identify the "
+"requirements of the storage that those offerings have. For example, the high"
+" end service offering may require \"fast\" for its root disk volume."
+msgstr "存储是可以被\"标签\"的。标签是与主存储、磁盘方案或服务方案关联的文本字符串属性。标签允许管理员给存储添加额外的信息，比如\"SSD\"或者\"慢速\"。CloudStack不解释标签，只是将其与服务方案和磁盘方案上的标签进行匹配。CloudStack要求在主存储上分配root或数据磁盘之前，服务方案和磁盘方案上的所有标签都已存在于该主存储上。服务方案和磁盘方案的标签被用于标识这些方案对存储的要求。比如，高端服务方案可能要求它的root磁盘卷是\"快速的\"。"
+
+# 62e869476add4ac8a88d34476a992a0b
+#: ../../storage.rst:158
+msgid ""
+"The interaction between tags, allocation, and volume copying across clusters"
+" and pods can be complex. To simplify the situation, use the same set of "
+"tags on the primary storage for all clusters in a pod. Even if different "
+"devices are used to present those tags, the set of exposed tags can be the "
+"same."
+msgstr "标签、分配以及跨群集和机架的卷复制之间的相互作用可能很复杂。为简化这种情况，请对一个机架内所有群集的主存储使用同一组标签。即使用不同的设备来呈现这些标签，暴露出来的标签组也可以是一样的。"
+
+# 85d0b9514cd94531b80bb8458c5ce0dc
+#: ../../storage.rst:165
+msgid "Maintenance Mode for Primary Storage"
+msgstr "主存储的维护模式"
+
+# 51dcf6eb4eec44809e6e374dbf77c6bf
+#: ../../storage.rst:167
+msgid ""
+"Primary storage may be placed into maintenance mode. This is useful, for "
+"example, to replace faulty RAM in a storage device. Maintenance mode for a "
+"storage device will first stop any new guests from being provisioned on the "
+"storage device. Then it will stop all guests that have any volume on that "
+"storage device. When all such guests are stopped the storage device is in "
+"maintenance mode and may be shut down. When the storage device is online "
+"again you may cancel maintenance mode for the device. The CloudStack will "
+"bring the device back online and attempt to start all guests that were "
+"running at the time of the entry into maintenance mode."
+msgstr "主存储可以被设置成维护模式。这很有用，例如，更换存储设备中损坏的RAM。对存储设备的维护模式将首先停止在该存储设备上创建任何新的来宾虚拟机，然后停止所有在该存储设备上有卷的来宾虚拟机。当所有这些虚拟机都停止后，该存储设备就进入了维护模式并可以关机。当存储设备再次上线的时候，你可以对该设备取消维护模式。CloudStack将使该设备恢复在线状态，并尝试启动所有在进入维护模式前正在运行的来宾虚拟机。"
+
+# 50c6445f7bbf459c8184fcbee613ef51
+#: ../../storage.rst:179
+msgid "Secondary Storage"
+msgstr "辅助存储"
+
+# 93e8dd0aa2384174bfaff80a3205b6cb
+#: ../../storage.rst:181
+msgid ""
+"This section gives concepts and technical details about CloudStack secondary"
+" storage. For information about how to install and configure secondary "
+"storage through the CloudStack UI, see the Advanced Installation Guide."
+msgstr "本章节讲述的是关于CloudStack辅助存储的概念和技术细节。更多关于如何通过CloudStack UI安装和配置辅助存储的信息，请参阅高级安装指南。"
+
+# 18793cf034ab43f1b77b289ac9d4286b
+#: ../../storage.rst:186
+msgid ""
+"`“About Secondary Storage” "
+"<http://docs.cloudstack.apache.org/en/latest/concepts.html#about-secondary-"
+"storage>`_"
+msgstr "`“关于辅助存储” <http://docs.cloudstack.apache.org/en/latest/concepts.html#about-secondary-storage>`_"
+
+# 581e0ba4319a47feb92812e319ae14af
+#: ../../storage.rst:189
+msgid "Working With Volumes"
+msgstr "使用卷"
+
+# 142cd329b56e4fb2b0e17fb5fe687958
+#: ../../storage.rst:191
+msgid ""
+"A volume provides storage to a guest VM. The volume can provide for a root "
+"disk or an additional data disk. CloudStack supports additional volumes for "
+"guest VMs."
+msgstr "卷为来宾虚拟机提供存储。卷可以作为root磁盘或附加的数据磁盘。CloudStack支持为来宾虚拟机附加额外的卷。"
+
+# cdaaac178c9946e9bd5030fe82be6c78
+#: ../../storage.rst:195
+msgid ""
+"Volumes are created for a specific hypervisor type. A volume that has been "
+"attached to guest using one hypervisor type (e.g, XenServer) may not be "
+"attached to a guest that is using another hypervisor type, for "
+"example:vSphere, KVM. This is because the different hypervisors use "
+"different disk image formats."
+msgstr "卷是针对特定的hypervisor类型创建的。已附加到一种hypervisor类型(如XenServer)来宾虚拟机的卷，不能再附加到其他hypervisor类型(如vSphere、KVM)的来宾虚拟机上。这是因为不同的hypervisor使用不同的磁盘镜像格式。"
+
+# 91341fe4ee6540a9a6ee240d99134524
+#: ../../storage.rst:201
+msgid ""
+"CloudStack defines a volume as a unit of storage available to a guest VM. "
+"Volumes are either root disks or data disks. The root disk has \"/\" in the "
+"file system and is usually the boot device. Data disks provide for "
+"additional storage, for example: \"/opt\" or \"D:\". Every guest VM has a "
+"root disk, and VMs can also optionally have a data disk. End users can mount"
+" multiple data disks to guest VMs. Users choose data disks from the disk "
+"offerings created by administrators. The user can create a template from a "
+"volume as well; this is the standard procedure for private template "
+"creation. Volumes are hypervisor-specific: a volume from one hypervisor type"
+" may not be used on a guest of another hypervisor type."
+msgstr "CloudStack将卷定义为来宾虚拟机可用的一个存储单元。卷或者是root磁盘，或者是数据磁盘。root磁盘包含文件系统中的 \"/\" 并且通常是启动设备。数据磁盘提供额外的存储，比如\"/opt\"或者\"D:\"。每个来宾VM都有一个root磁盘，VM还可以选择挂载数据磁盘。终端用户可以给来宾VM挂载多个数据磁盘。用户从管理员创建的磁盘方案中选择数据磁盘。用户同样可以基于一个卷创建模板；这是创建私有模板的标准流程。卷是与hypervisor相关的：一种hypervisor类型上的卷不能用于其他hypervisor类型的来宾虚拟机。"
+
+# da3a2cb646de40cba343e682fd29a500
+#: ../../storage.rst:213
+msgid ""
+"CloudStack supports attaching up to 13 data disks to a VM on XenServer "
+"hypervisor versions 6.0 and above. For the VMs on other hypervisor types, "
+"the data disk limit is 6."
+msgstr "在XenServer 6.0及以上版本中，CloudStack支持给一个VM最多附加13个数据磁盘。其他hypervisor类型上的VM最多附加6个数据磁盘。"
+
+# f994b230e52240c089e8967a20e17040
+#: ../../storage.rst:216
+msgid "Creating a New Volume"
+msgstr "创建新卷"
+
+# 73e8fe50f38c4b7ab55f0fc3569b141f
+#: ../../storage.rst:218
+msgid ""
+"You can add more data disk volumes to a guest VM at any time, up to the "
+"limits of your storage capacity. Both CloudStack administrators and users "
+"can add volumes to VM instances. When you create a new volume, it is stored "
+"as an entity in CloudStack, but the actual storage resources are not "
+"allocated on the physical storage device until you attach the volume. This "
+"optimization allows the CloudStack to provision the volume nearest to the "
+"guest that will use it when the first attachment is made."
+msgstr "只要不超出存储容量的限制，你可以随时给来宾虚拟机添加多个数据卷。CloudStack的管理员和普通用户都可以给虚拟机实例添加卷。当你创建了一个新卷时，它以一个实体的形式存在于CloudStack中，但是在你将其附加到VM之前，并不会在物理存储设备上分配实际的存储资源。这个优化使得CloudStack可以在第一次附加时，把卷部署在距离使用它的来宾虚拟机最近的地方。"
+
+# 4e95f1cce44c4810b4dbd2d9ad64b99c
+#: ../../storage.rst:227
+msgid "Using Local Storage for Data Volumes"
+msgstr "将本地存储用于数据卷"
+
+# 77c86c26103b40de80b813b21c07de64
+#: ../../storage.rst:229
+msgid ""
+"You can create data volumes on local storage (supported with XenServer, KVM,"
+" and VMware). The data volume is placed on the same host as the VM instance "
+"that is attached to the data volume. These local data volumes can be "
+"attached to virtual machines, detached, re-attached, and deleted just as "
+"with the other types of data volume."
+msgstr "您可以在本地存储上创建数据卷(XenServer、KVM和VMware支持)。数据卷会被放置在与其所附加的VM实例相同的主机上。这些本地数据卷可以像其他类型的数据卷一样附加到虚拟机、卸载、再附加和删除。"
+
+# 4e4085b4548c4fd291c04570d4b72c31
+#: ../../storage.rst:235
+msgid ""
+"Local storage is ideal for scenarios where persistence of data volumes and "
+"HA is not required. Some of the benefits include reduced disk I/O latency "
+"and cost reduction from using inexpensive local disks."
+msgstr "在不需要数据卷持久化和HA的场景下，本地存储是理想的选择。其优点包括降低磁盘I/O延迟，以及使用廉价的本地磁盘来降低成本。"
+
+# fb5fadabf5084ba2be5f47f42c575bec
+#: ../../storage.rst:239
+msgid ""
+"In order for local volumes to be used, the feature must be enabled for the "
+"zone."
+msgstr "为了能使用本地卷，必须在区域中启用该功能。"
+
+# 2738dcf7ec914826a7fee9dde6d1dd3d
+#: ../../storage.rst:242
+msgid ""
+"You can create a data disk offering for local storage. When a user creates a"
+" new VM, they can select this disk offering in order to cause the data disk "
+"volume to be placed in local storage."
+msgstr "您可以为本地存储创建一个数据磁盘方案。当用户创建新VM时，可以选择该磁盘方案，使数据卷被放置在本地存储上。"
+
+# e6cc30d22423450c85f9517f7c29b58c
+#: ../../storage.rst:246
+msgid ""
+"You can not migrate a VM that has a volume in local storage to a different "
+"host, nor migrate the volume itself away to a different host. If you want to"
+" put a host into maintenance mode, you must first stop any VMs with local "
+"data volumes on that host."
+msgstr "你不能把在本地存储上有卷的VM迁移到其他主机，也不能把这样的卷本身迁移到其他主机。若要将主机置于维护模式，必须先停止该主机上所有拥有本地数据卷的VM。"
+
+# 5ba5e0d99d2b485eb47568e8f5c49974
+#: ../../storage.rst:252
+msgid "To Create a New Volume"
+msgstr "创建新卷"
+
+# 854156bcaf7841df87abdbbe3c2e1a97
+# 81089de40b7f4cfabaf992d654b45b6a
+# d2aadcf9b28745ccabf9ab9238f12bb8
+# 87447c411f444603b00a3bdf59da65d0
+# 6f7be70585d8470fb8e8e0f82a57714b
+# c9aa8bf9f15c4956ab9d0372030dc6ee
+# 88f2f68cca994709945a84a66ff08f83
+#: ../../storage.rst:256 ../../storage.rst:386 ../../storage.rst:428
+#: ../../storage.rst:498 ../../storage.rst:529 ../../storage.rst:566
+#: ../../storage.rst:636
+msgid "Log in to the CloudStack UI as a user or admin."
+msgstr "以用户或管理员身份登录CloudStack UI。"
+
+# 72e414f9ba5a469aa0219d560888934b
+# 14f9e0c99a7a49f6ac1f53e3163d2025
+# 50e1473900704ebcae177aa3a73ed45c
+# ce7e768615b345d1a5cfd791b3e49343
+#: ../../storage.rst:260 ../../storage.rst:324 ../../storage.rst:640
+#: ../../storage.rst:756
+msgid "In the left navigation bar, click Storage."
+msgstr "在左侧导航栏点击存储。"
+
+# 5e66820dfd5e495fb59a2e3b04eceaf4
+# 7490a7779492470692842a5bc16f66a8
+# 05e2753aad0543ed9de8f5fe61ae80d8
+#: ../../storage.rst:264 ../../storage.rst:394 ../../storage.rst:644
+msgid "In Select View, choose Volumes."
+msgstr "在选择视图中选择卷。"
+
+# 98f953c385004db884f1a0b9d7fcce09
+#: ../../storage.rst:268
+msgid ""
+"To create a new volume, click Add Volume, provide the following details, and"
+" click OK."
+msgstr "要创建新卷，点击添加卷，填写以下信息后点击确定。"
+
+# f77218cc8e0944c5ac93425caddc5a5c
+#: ../../storage.rst:273
+msgid "Name. Give the volume a unique name so you can find it later."
+msgstr "名称。给卷一个唯一的名称以便你以后能找到它。"
+
+# 77f2d58bc029432fad2e20894e0dc4a4
+#: ../../storage.rst:277
+msgid ""
+"Availability Zone. Where do you want the storage to reside? This should be "
+"close to the VM that will use the volume."
+msgstr "可用的资源域。你想让这个存储位于哪里？它应该靠近将要使用该卷的VM。"
+
+# 0dd389820dab437ab448b331b2e2e3dc
+#: ../../storage.rst:282
+msgid "Disk Offering. Choose the characteristics of the storage."
+msgstr "磁盘方案。选择存储的特性。"
+
+# 3ff901f9fc4245859f4413152d691ec2
+#: ../../storage.rst:284
+msgid ""
+"The new volume appears in the list of volumes with the state “Allocated.” "
+"The volume data is stored in CloudStack, but the volume is not yet ready for"
+" use"
+msgstr "新建的卷会出现在卷列表中，状态为“已分配”。卷数据已经存储到CloudStack中，但是该卷还不能被使用。"
+
+# 1fd0848ef74948fdbcbccf818d92177d
+#: ../../storage.rst:290
+msgid "To start using the volume, continue to Attaching a Volume"
+msgstr "要开始使用该卷，请继续阅读附加卷。"
+
+# c0df3d546d4d4bb0b749ee670e7fb1b4
+#: ../../storage.rst:293
+msgid "Uploading an Existing Volume to a Virtual Machine"
+msgstr "上传已有的卷给虚拟机"
+
+# 72bbdbadb3124063954e9e6fe0dce7a1
+#: ../../storage.rst:295
+msgid ""
+"Existing data can be made accessible to a virtual machine. This is called "
+"uploading a volume to the VM. For example, this is useful to upload data "
+"from a local file system and attach it to a VM. Root administrators, domain "
+"administrators, and end users can all upload existing volumes to VMs."
+msgstr "可以让虚拟机访问已有的数据。这被称为向VM上传一个卷。比如，这对于从本地文件系统上传数据并附加到VM非常有用。Root管理员、域管理员和终端用户都可以给VM上传已有的卷。"
+
+# c9884a3f983640be9c819a9386183862
+#: ../../storage.rst:301
+msgid ""
+"The upload is performed using HTTP. The uploaded volume is placed in the "
+"zone's secondary storage"
+msgstr "上传使用HTTP完成。上传的卷被放置在区域的辅助存储中。"
+
+# 0a24f9d7f649445ea6f882e03af5a4c8
+#: ../../storage.rst:304
+msgid ""
+"You cannot upload a volume if the preconfigured volume limit has already "
+"been reached. The default limit for the cloud is set in the global "
+"configuration parameter max.account.volumes, but administrators can also set"
+" per-domain limits that are different from the global default. See Setting "
+"Usage Limits"
+msgstr "如果已经达到了预先配置的卷数量上限，你就不能上传卷了。云的默认上限在全局配置参数max.account.volumes中设置，但是管理员也可以为每个域设置不同于全局默认值的上限。请参阅设置使用限制。"
+
+# 561cea17ed5b4aa18cea478b6159261d
+#: ../../storage.rst:310
+msgid "To upload a volume:"
+msgstr "要上传一个卷："
+
+# 5ca6925a182745bd92c83dddbb7ae803
+#: ../../storage.rst:314
+msgid ""
+"(Optional) Create an MD5 hash (checksum) of the disk image file that you are"
+" going to upload. After uploading the data disk, CloudStack will use this "
+"value to verify that no data corruption has occurred."
+msgstr "(可选)为将要上传的磁盘镜像文件创建一个MD5哈希(校验和)。上传数据磁盘之后，CloudStack将使用这个值来验证数据没有损坏。"
+
+# 51a89894bbfa4442971f72c7e46f146d
+#: ../../storage.rst:320
+msgid "Log in to the CloudStack UI as an administrator or user"
+msgstr "以管理员或用户身份登录CloudStack UI"
+
+# 28723898f3674b00ab161be4a747617a
+#: ../../storage.rst:328
+msgid "Click Upload Volume."
+msgstr "点击上传卷。"
+
+# ad72af46b40f4b3f9a6e531498b729d9
+#: ../../storage.rst:332
+msgid "Provide the following:"
+msgstr "填写以下内容："
+
+# 2171cbf4083742a8be4e3f3d6905d0da
+#: ../../storage.rst:336
+msgid ""
+"Name and Description. Any desired name and a brief description that can be "
+"shown in the UI."
+msgstr "名称和描述。任何想要的名称和一个会显示在UI中的简短描述。"
+
+# 4cd509c15df24aba8666b7d11a9051be
+#: ../../storage.rst:341
+msgid ""
+"Availability Zone. Choose the zone where you want to store the volume. VMs "
+"running on hosts in this zone can attach the volume."
+msgstr "可用的区域。选择你想存储该卷的区域。运行在该区域内主机上的VM都可以附加这个卷。"
+
+# 17cb7725cd0940cf8557d68206638aea
+#: ../../storage.rst:346
+msgid ""
+"Format. Choose one of the following to indicate the disk image format of the"
+" volume."
+msgstr "格式。从下列格式中选择一种作为该卷的磁盘镜像格式。"
+
+# e4c763d0aab143b49ab6e9d9a1432df0
+#: ../../storage.rst:350
+msgid "Hypervisor"
+msgstr "Hypervisor"
+
+# 31ec3ac278d44770a548d99764119364
+#: ../../storage.rst:350
+msgid "Disk Image Format"
+msgstr "磁盘镜像格式"
+
+# 81cdbfc4baaf46318a9b1031f0aa73f9
+#: ../../storage.rst:352
+msgid "XenServer"
+msgstr "XenServer"
+
+# b0ad57c5512c4cfda950f3a262618a6d
+#: ../../storage.rst:353
+msgid "VMware"
+msgstr "VMware"
+
+# 9714e77ca28e441fad02c1f395a121a9
+#: ../../storage.rst:353
+msgid "OVA"
+msgstr "OVA"
+
+# 1c6d0308a6d84f04909361d7d827e4a8
+#: ../../storage.rst:359
+msgid ""
+"URL. The secure HTTP or HTTPS URL that CloudStack can use to access your "
+"disk. The type of file at the URL must match the value chosen in Format. For"
+" example, if Format is VHD, the URL might look like the following:"
+msgstr "URL。CloudStack用来访问你的磁盘的安全HTTP或HTTPS URL。URL指向的文件类型必须与格式中选择的值相符。例如，如果格式为VHD，URL应该类似如下："
+
+# f0494cc84c3941c1b9f499bc95a77793
+#: ../../storage.rst:364
+msgid "``http://yourFileServerIP/userdata/myDataDisk.vhd``"
+msgstr "``http://yourFileServerIP/userdata/myDataDisk.vhd``"
+
+# 459a453424c84b51abfd379a935f578d
+#: ../../storage.rst:368
+msgid "MD5 checksum. (Optional) Use the hash that you created in step 1."
+msgstr "MD5校验和。(可选)使用在步骤1中创建的哈希值。"
+
+# 8532aa0d890a4990b343381696efe17b
+#: ../../storage.rst:372
+msgid ""
+"Wait until the status of the volume shows that the upload is complete. Click"
+" Instances - Volumes, find the name you specified in step 5, and make sure "
+"the status is Uploaded."
+msgstr "等到卷的状态显示上传完成。点击实例-卷，找到你在步骤5中指定的名称，并确保状态是已上传。"
+
+# 9d29a7dddfd446539ef41dfcdf464ca1
+#: ../../storage.rst:377
+msgid "Attaching a Volume"
+msgstr "附加一个卷"
+
+# 59c5f93ac3c64d8eb008975ca5b66300
+#: ../../storage.rst:379
+msgid ""
+"You can attach a volume to a guest VM to provide extra disk storage. Attach "
+"a volume when you first create a new volume, when you are moving an existing"
+" volume from one VM to another, or after you have migrated a volume from one"
+" storage pool to another."
+msgstr "你可以附加一个卷到来宾虚拟机上以提供额外的磁盘存储。当你第一次创建新卷、将已有的卷从一台虚拟机移动到另一台，或者将卷从一个存储池迁移到另一个之后，你都可以附加这个卷。"
+
+# d2666c87d93e413aa8dd8637e062f650
+#: ../../storage.rst:390
+msgid "In the left navigation, click Storage."
+msgstr "在左侧导航栏点击存储。"
+
+# 60b9f463a5304e338a5505ef7d8c64ff
+#: ../../storage.rst:398
+msgid ""
+"Click the volume name in the Volumes list, then click the Attach Disk button"
+" |AttachDiskButton.png|"
+msgstr "在卷列表中点击卷的名称，然后点击附加磁盘按钮 |AttachDiskButton.png|"
+
+# 9b1bee8f6c564762854e98112c13ef30
+#: ../../storage.rst:403
+msgid ""
+"In the Instance popup, choose the VM to which you want to attach the volume."
+" You will only see instances to which you are allowed to attach volumes; for"
+" example, a user will see only instances created by that user, but the "
+"administrator will have more choices."
+msgstr "在弹出的实例界面，选择你打算附加卷的那台虚拟机。你只能看到允许你附加卷的实例；比如，普通用户只能看到他自己创建的实例，而管理员将会有更多的选择。"
+
+# 650c85a5e1254efdb77ed55f37f13a14
+#: ../../storage.rst:410
+msgid ""
+"When the volume has been attached, you should be able to see it by clicking "
+"Instances, the instance name, and View Volumes."
+msgstr "当卷被附加之后，你可以通过点击实例、实例名和查看卷来看到该卷。"
+
+# 3a8754012f8f4da98dfba46033984c06
+#: ../../storage.rst:414
+msgid "Detaching and Moving Volumes"
+msgstr "卸载和移动卷"
+
+# 15d989ccfe6b43b28260c1289e7388cf
+#: ../../storage.rst:417
+msgid ""
+"This procedure is different from moving volumes from one storage pool to "
+"another as described in `“VM Storage Migration” <#vm-storage-migration>`_."
+msgstr "这个过程不同于将卷从一个存储池移动到另一个存储池，后者在 `“VM存储迁移” <#vm-storage-migration>`_ 中所描述。"
+
+# 8bf4295f191343829d3c43e858b12a1e
+#: ../../storage.rst:419
+msgid ""
+"A volume can be detached from a guest VM and attached to another guest. Both"
+" CloudStack administrators and users can detach volumes from VMs and move "
+"them to other VMs."
+msgstr "卷可以从一台来宾虚拟机上卸载并附加到另一台来宾虚拟机上。CloudStack管理员和用户都能从VM上卸载卷并将其移动到其他VM上。"
+
+# 4b538b0a7328497cbb1de6c4dd724eb2
+#: ../../storage.rst:423
+msgid ""
+"If the two VMs are in different clusters, and the volume is large, it may "
+"take several minutes for the volume to be moved to the new VM."
+msgstr "如果两台VM位于不同的群集中，并且卷很大，那么卷移动到新的VM可能要花费几分钟时间。"
+
+# 39d64a096df5469ba79c30f50856d546
+#: ../../storage.rst:432
+msgid ""
+"In the left navigation bar, click Storage, and choose Volumes in Select "
+"View. Alternatively, if you know which VM the volume is attached to, you can"
+" click Instances, click the VM name, and click View Volumes."
+msgstr "在左侧的导航栏点击存储，在选择视图中选择卷。或者，如果你知道卷附加在哪台VM上，你可以点击实例，点击VM名称，然后点击查看卷。"
+
+# 791d977d23984313b80ffa2aa915577d
+#: ../../storage.rst:439
+msgid ""
+"Click the name of the volume you want to detach, then click the Detach Disk "
+"button. |DetachDiskButton.png|"
+msgstr "点击你想卸载的卷的名称，然后点击卸载磁盘按钮。 |DetachDiskButton.png|"
+
+# 6b7ea67796a948bbbaeab6bbdaf832c2
+#: ../../storage.rst:444
+msgid ""
+"To move the volume to another VM, follow the steps in `“Attaching a Volume” "
+"<#attaching-a-volume>`_."
+msgstr "要移动卷至其他VM，按照 `“附加一个卷” <#attaching-a-volume>`_ 中的步骤操作。"
+
+# 6597c6aaa3fc40f89bdac966f91c4642
+#: ../../storage.rst:448
+msgid "VM Storage Migration"
+msgstr "VM存储迁移"
+
+# 98aac3254e864e4dad42c3824a081f9d
+#: ../../storage.rst:450
+msgid "Supported in XenServer, KVM, and VMware."
+msgstr "支持XenServer、KVM和VMware。"
+
+# 04f285a0220743a19e6d3a7b54226766
+#: ../../storage.rst:453
+msgid ""
+"This procedure is different from moving disk volumes from one VM to another "
+"as described in `“Detaching and Moving Volumes” <#detaching-and-moving-"
+"volumes>`_."
+msgstr "这个过程不同于将磁盘卷从一台VM移动到另一台VM，后者在 `“卸载和移动卷” <#detaching-and-moving-volumes>`_ 中所描述。"
+
+# 63619f4ba4c14b57b06d4b6f34cad27e
+#: ../../storage.rst:455
+msgid ""
+"You can migrate a virtual machine’s root disk volume or any additional data "
+"disk volume from one storage pool to another in the same zone."
+msgstr "你可以将虚拟机的root磁盘卷或任何其他数据磁盘卷从同一区域中的一个存储池迁移到另一个存储池。"
+
+# b374aaa7079f4357a3f44ea9e8d06dfc
+#: ../../storage.rst:458
+msgid ""
+"You can use the storage migration feature to achieve some commonly desired "
+"administration goals, such as balancing the load on storage pools and "
+"increasing the reliability of virtual machines by moving them away from any "
+"storage pool that is experiencing issues."
+msgstr "你可以使用存储迁移功能完成一些常见的管理目标，如平衡存储池的负载，以及将虚拟机从有问题的存储池上迁移出去以提高其可靠性。"
+
+# 04abf2b317084f6a948d7878ad1cbaf4
+#: ../../storage.rst:463
+msgid ""
+"On XenServer and VMware, live migration of VM storage is enabled through "
+"CloudStack support for XenMotion and vMotion. Live storage migration allows "
+"VMs to be moved from one host to another, where the VMs are not located on "
+"storage shared between the two hosts. It provides the option to live migrate"
+" a VM’s disks along with the VM itself. It is possible to migrate a VM from "
+"one XenServer resource pool / VMware cluster to another, or to migrate a VM "
+"whose disks are on local storage, or even to migrate a VM’s disks from one "
+"storage repository to another, all while the VM is running."
+msgstr "在XenServer和VMware上，由于CloudStack支持XenMotion和vMotion，VM存储的在线迁移是可用的。在线存储迁移允许不位于两台主机共享存储上的VM从一台主机迁移到另一台主机。它提供了将VM的磁盘与VM本身一起在线迁移的选项。借此可以将VM从一个XenServer资源池/VMware群集迁移到另一个，或者迁移磁盘位于本地存储的VM，甚至在存储库之间迁移VM的磁盘，而且整个迁移期间VM一直在运行。"
+
+# def78c6283b8421d959acad05e006b34
+#: ../../storage.rst:474
+msgid ""
+"Because of a limitation in VMware, live migration of storage for a VM is "
+"allowed only if the source and target storage pool are accessible to the "
+"source host; that is, the host where the VM is running when the live "
+"migration operation is requested."
+msgstr "由于VMware的限制，仅当源和目标存储池都能被源主机访问时才允许VM存储的在线迁移；也就是说，源主机是发起在线迁移操作时VM正在运行的主机。"
+
+# c24b72459dde481887747acf4697636f
+#: ../../storage.rst:477
+msgid "Migrating a Data Volume to a New Storage Pool"
+msgstr "将数据卷迁移到新的存储池"
+
+# bf37a69a59b542a98c25363d45c9713d
+#: ../../storage.rst:479
+msgid "There are two situations when you might want to migrate a disk:"
+msgstr "当你想迁移磁盘的时候可能有两种情况："
+
+# f8b93cb34c384463a75f83959616871f
+#: ../../storage.rst:483
+msgid ""
+"Move the disk to new storage, but leave it attached to the same running VM."
+msgstr "将磁盘移动到新的存储，但是还将其附加在原来正在运行的VM上。"
+
+# 5cbf239bfd444a0f9c842deb94ed82ba
+#: ../../storage.rst:488
+msgid ""
+"Detach the disk from its current VM, move it to new storage, and attach it "
+"to a new VM."
+msgstr "从当前VM上卸载磁盘，将其移动至新的存储，再将其附加至新的VM。"
+
+# 0d25ecabc349499586d885844f71b1d0
+#: ../../storage.rst:492
+msgid "Migrating Storage For a Running VM"
+msgstr "为正在运行的VM迁移存储"
+
+# 245dccc84a4e456990f3c0543c4fead3
+#: ../../storage.rst:494
+msgid "(Supported on XenServer and VMware)"
+msgstr "(支持XenServer和VMware)"
+
+# 30cb4d3d0e4e417492b11e590eb7f765
+#: ../../storage.rst:502
+msgid ""
+"In the left navigation bar, click Instances, click the VM name, and click "
+"View Volumes."
+msgstr "在左侧的导航栏点击实例，再点击VM名称，接着点击查看卷。"
+
+# 5277b5ef2825414da877bab6feb8fc04
+#: ../../storage.rst:507
+msgid "Click the volume you want to migrate."
+msgstr "点击你想迁移的卷。"
+
+# d0f6564fdaf8483e98845c58dc25c962
+# 9e933a0caa314e80b7270c72c0de6271
+#: ../../storage.rst:511 ../../storage.rst:533
+msgid ""
+"Detach the disk from the VM. See `“Detaching and Moving Volumes” "
+"<#detaching-and-moving-volumes>`_ but skip the “reattach” step at the end. "
+"You will do that after migrating to new storage."
+msgstr "从VM上卸载磁盘。请参阅 `“卸载和移动卷” <#detaching-and-moving-volumes>`_，但是跳过最后的“重新附加”步骤。你会在迁移到新存储之后再做这一步。"
+
+# 4649c894198146228bfeed3a1fb7bdd4
+# 27b66737b23347a0926867a4c198ca1a
+#: ../../storage.rst:517 ../../storage.rst:539
+msgid ""
+"Click the Migrate Volume button |Migrateinstance.png| and choose the "
+"destination from the dropdown list."
+msgstr "点击迁移卷按钮 |Migrateinstance.png| ，然后从下拉列表里面选择目标位置。"
+
+# 8b7c76c6d3b441929912e2e36ceb34d2
+#: ../../storage.rst:521
+msgid ""
+"Watch for the volume status to change to Migrating, then back to Ready."
+msgstr "观察卷的状态变成正在迁移，然后变回已就绪。"
+
+# 3ffc696411bf46ccacd94ed84c8a4e52
+#: ../../storage.rst:525
+msgid "Migrating Storage and Attaching to a Different VM"
+msgstr "迁移存储并附加到不同的VM"
+
+# 451b903ccdf9481a87553a1ff3e11675
+#: ../../storage.rst:543
+msgid ""
+"Watch for the volume status to change to Migrating, then back to Ready. You "
+"can find the volume by clicking Storage in the left navigation bar. Make "
+"sure that Volumes is displayed at the top of the window, in the Select View "
+"dropdown."
+msgstr "观察卷的状态变成正在迁移，然后变回已就绪。你可以点击左侧导航栏中的存储找到卷。在选择视图下拉列表中，确保窗口顶部显示的是卷。"
+
+# 3ea9addc45e04febafb2f43888f9756a
+#: ../../storage.rst:550
+msgid ""
+"Attach the volume to any desired VM running in the same cluster as the new "
+"storage server. See `“Attaching a Volume” <#attaching-a-volume>`_"
+msgstr "将卷附加到与新存储服务器位于同一群集中的任何想要的VM上。请参阅 `“附加一个卷” <#attaching-a-volume>`_。"
+
+# 4f145ec0a8e9487a840dd7dbd6858579
+#: ../../storage.rst:555
+msgid "Migrating a VM Root Volume to a New Storage Pool"
+msgstr "迁移VM的root卷到新的存储池"
+
+# b1b1c5f5444e43bcb759263c0dc47036
+#: ../../storage.rst:557
+msgid ""
+"(XenServer, VMware) You can live migrate a VM's root disk from one storage "
+"pool to another, without stopping the VM first."
+msgstr "(XenServer、VMware)你可以在不先停止VM的情况下，将VM的root磁盘从一个存储池在线迁移到另外一个。"
+
+# df65430daab94ed197c4acaf08a3e836
+#: ../../storage.rst:560
+msgid ""
+"(KVM) When migrating the root disk volume, the VM must first be stopped, and"
+" users can not access the VM. After migration is complete, the VM can be "
+"restarted."
+msgstr "(KVM)当迁移root磁盘卷的时候，VM必须先关机，这时用户不能访问VM。迁移完成之后，VM就能重启了。"
+
+# be481dbccbbf4cb0a2cc056284533ab8
+#: ../../storage.rst:570
+msgid "In the left navigation bar, click Instances, and click the VM name."
+msgstr "在左侧的导航栏里点击实例，然后点击VM名称。"
+
+# 5274bdb89bf44ea3b038c790ff8672b4
+#: ../../storage.rst:574
+msgid "(KVM only) Stop the VM."
+msgstr "(仅限于KVM)停止VM。"
+
+# 33f3864ed4d34fb881cc89f632512d88
+#: ../../storage.rst:578
+msgid ""
+"Click the Migrate button |Migrateinstance.png| and choose the destination "
+"from the dropdown list."
+msgstr "点击迁移按钮 |Migrateinstance.png| ，然后从下拉列表中选择目标位置。"
+
+# 3349af1b38e4457482c539813e40a08c
+#: ../../storage.rst:581
+msgid ""
+"If the VM's storage has to be migrated along with the VM, this will be noted"
+" in the host list. CloudStack will take care of the storage migration for "
+"you."
+msgstr "如果VM的存储必须与VM一起被迁移，这点会在主机列表中标注。CloudStack会为你自动进行存储迁移。"
+
+# 4445e369263d4d0dafd72fe3666283a7
+#: ../../storage.rst:585
+msgid ""
+"Watch for the volume status to change to Migrating, then back to Running (or"
+" Stopped, in the case of KVM). This can take some time."
+msgstr "观察卷的状态变成正在迁移，然后变回运行中(在KVM中则是已停止)。这个过程会持续一段时间。"
+
+# e1ad558399c64d8aa7ae00633e58c413
+#: ../../storage.rst:590
+msgid "(KVM only) Restart the VM."
+msgstr "(仅限于KVM)重启VM。"
+
+# a5eee53b3a944bf88fba18aafe337287
+#: ../../storage.rst:593
+msgid "Resizing Volumes"
+msgstr "调整卷大小"
+
+# a5d5757162eb492aaf50d2220d1d1b98
+#: ../../storage.rst:595
+msgid ""
+"CloudStack provides the ability to resize data disks; CloudStack controls "
+"volume size by using disk offerings. This provides CloudStack administrators"
+" with the flexibility to choose how much space they want to make available "
+"to the end users. Volumes within the disk offerings with the same storage "
+"tag can be resized. For example, if you only want to offer 10, 50, and 100 "
+"GB offerings, the allowed resize should stay within those limits. That "
+"implies if you define a 10 GB, a 50 GB and a 100 GB disk offerings, a user "
+"can upgrade from 10 GB to 50 GB, or 50 GB to 100 GB. If you create a custom-"
+"sized disk offering, then you have the option to resize the volume by "
+"specifying a new, larger size."
+msgstr "CloudStack提供了调整数据磁盘大小的功能；CloudStack借助磁盘方案控制卷大小。这样CloudStack管理员可以灵活地选择他们想给最终用户提供多少可用空间。具有相同存储标签的磁盘方案之间的卷可以调整大小。比如，如果你只想提供10、50和100GB的方案，允许调整的大小就不会超出这些限制。也就是说，如果你定义了10GB、50GB和100GB的磁盘方案，用户可以从10GB升级到50GB，或者从50GB升级到100GB。如果你创建了自定义大小的磁盘方案，那么你可以通过指定一个新的、更大的值来调整卷的大小。"
+
+# 4d2a89a3e81a4293bb926db2952a2d89
+#: ../../storage.rst:606
+msgid ""
+"Additionally, using the resizeVolume API, a data volume can be moved from a "
+"static disk offering to a custom disk offering with the size specified. This"
+" functionality allows those who might be billing by certain volume sizes or "
+"disk offerings to stick to that model, while providing the flexibility to "
+"migrate to whatever custom size necessary."
+msgstr "另外，使用resizeVolume API，数据卷可以从一个静态磁盘方案移动到指定大小的自定义磁盘方案。此功能允许按特定卷大小或磁盘方案计费的用户沿用原有的计费模式，同时可以灵活地迁移到任何需要的自定义大小。"
+
+# 0c302b9381964652a04f4931be9a3656
+#: ../../storage.rst:612
+msgid ""
+"This feature is supported on KVM, XenServer, and VMware hosts. However, "
+"shrinking volumes is not supported on VMware hosts."
+msgstr "KVM、XenServer和VMware主机支持这个功能。但是VMware主机上不支持缩小卷。"
+
+# c9bd7905071a44d4a6f7905fee796c4a
+#: ../../storage.rst:615
+msgid "Before you try to resize a volume, consider the following:"
+msgstr "在你尝试调整卷大小之前，请考虑以下几点："
+
+# 82c7ca43b93f4df09d6c513ff5b70486
+#: ../../storage.rst:619
+msgid "The VMs associated with the volume are stopped."
+msgstr "与卷关联的VM处于停止状态。"
+
+# 6f5a21b47e314e0484db238c6e9c457e
+#: ../../storage.rst:623
+msgid "The data disks associated with the volume are removed."
+msgstr "与卷关联的数据磁盘已经移除。"
+
+# fec276a1979249fb9dc124f18e1455bd
+#: ../../storage.rst:627
+msgid ""
+"When a volume is shrunk, the disk associated with it is simply truncated, "
+"and doing so would put its content at risk of data loss. Therefore, resize "
+"any partitions or file systems before you shrink a data disk so that all the"
+" data is moved off from that disk."
+msgstr "当卷缩小的时候，与其关联的磁盘只是被简单地截断，这样做会让其中的内容面临数据丢失的风险。因此，在缩小数据磁盘之前，请先调整所有分区或文件系统的大小，以便将全部数据从该磁盘上移走。"
+
+# 91d00577aa774910bc0090ff9249c714
+#: ../../storage.rst:632
+msgid "To resize a volume:"
+msgstr "要调整卷大小："
+
+# b8b72eb1404144179334e6914f461c43
+#: ../../storage.rst:648
+msgid ""
+"Select the volume name in the Volumes list, then click the Resize Volume "
+"button |resize-volume-icon.png|"
+msgstr "在卷列表中选择卷名称，然后点击调整卷大小按钮 |resize-volume-icon.png|"
+
+# 6d5f5f78fa294317aaaa19d07403aa91
+#: ../../storage.rst:653
+msgid ""
+"In the Resize Volume pop-up, choose desired characteristics for the storage."
+msgstr "在弹出的调整卷大小窗口中，为存储选择想要的特性。"
+
+# 94cd2f322cad423485fcfdaa0afe2ab4
+#: ../../storage.rst:656
+msgid "|resize-volume.png|"
+msgstr "|resize-volume.png|"
+
+# 7f15ee5fc83e482791baa84599ea55cd
+#: ../../storage.rst:660
+msgid "If you select Custom Disk, specify a custom size."
+msgstr "如果你选择自定义磁盘，请指定一个自定义大小。"
+
+# 138ac0a6338b4b0f95312bd4dec59967
+#: ../../storage.rst:664
+msgid "Click Shrink OK to confirm that you are reducing the size of a volume."
+msgstr "点击确定缩小，以确认你要缩小卷的大小。"
+
+# 7381eca9284e4773a1bb46e44a3865aa
+#: ../../storage.rst:667
+msgid ""
+"This parameter protects against inadvertent shrinking of a disk, which might"
+" lead to the risk of data loss. You must sign off that you know what you are"
+" doing."
+msgstr "此参数可防止无意中缩小磁盘而带来数据丢失的风险。你必须确认自己知道在做什么。"
+
+# eed53ca74872424ea49fa6cabfbe9eb9
+#: ../../storage.rst:673
+msgid "Click OK."
+msgstr "点击确定。"
+
+# 5d71f55bf71c4f58beb781d2cb94c74c
+#: ../../storage.rst:676
+msgid "Reset VM to New Root Disk on Reboot"
+msgstr "在VM重启时重置为新的root磁盘"
+
+# 2521d69d348047fc8a660b3b8a77b2ac
+#: ../../storage.rst:678
+msgid ""
+"You can specify that you want to discard the root disk and create a new one "
+"whenever a given VM is rebooted. This is useful for secure environments that"
+" need a fresh start on every boot and for desktops that should not retain "
+"state. The IP address of the VM will not change due to this operation."
+msgstr "你可以指定在给定的VM每次重启时都丢弃root磁盘并创建一个新的。对于每次启动都需要全新环境的安全环境，以及不应保留状态的桌面来说，这非常有用。此操作不会改变VM的IP地址。"
+
+# f9ff6cc586b8407c9ff1e235962ec731
+#: ../../storage.rst:684
+msgid "**To enable root disk reset on VM reboot:**"
+msgstr "**要启用在VM重启时重置root磁盘：**"
+
+# 0c3f06afe8504df7a4bf4ea7bea288bf
+#: ../../storage.rst:686
+msgid ""
+"When creating a new service offering, set the parameter isVolatile to True. "
+"VMs created from this service offering will have their disks reset upon "
+"reboot. See `“Creating a New Compute Offering” "
+"<service_offerings.html#creating-a-new-compute-offering>`_."
+msgstr "当创建一个新的服务方案时，设置isVolatile参数为True。从这个服务方案创建的VM一旦重启，它们的磁盘就会重置。请参阅 `“创建新的计算方案” <service_offerings.html#creating-a-new-compute-offering>`_。"
+
+# d0e6735ecfb243bcb2038075c332453d
+#: ../../storage.rst:692
+msgid "Volume Deletion and Garbage Collection"
+msgstr "卷的删除和垃圾回收"
+
+# 15a72eae3cce46e494793039f4408bc2
+#: ../../storage.rst:694
+msgid ""
+"The deletion of a volume does not delete the snapshots that have been "
+"created from the volume"
+msgstr "删除卷不会删除曾经对该卷做的快照。"
+
+# 484eaae26aa442b2af1e54dea16342d2
+#: ../../storage.rst:697
+msgid ""
+"When a VM is destroyed, data disk volumes that are attached to the VM are "
+"not deleted."
+msgstr "当一台VM被销毁时，附加到该VM的数据磁盘卷不会被删除。"
+
+# 5fd59185e6d64db280b2ca89ecca3f89
+#: ../../storage.rst:700
+msgid ""
+"Volumes are permanently destroyed using a garbage collection process. The "
+"global configuration variables expunge.delay and expunge.interval determine "
+"when the physical deletion of volumes will occur."
+msgstr "卷通过垃圾回收过程被永久销毁。全局配置变量expunge.delay和expunge.interval决定了何时物理删除卷。"
+
+# 9111bfdfe0ee4dcf8c15a359324059fc
+#: ../../storage.rst:706
+msgid ""
+"`expunge.delay`: determines how old the volume must be before it is "
+"destroyed, in seconds"
+msgstr "`expunge.delay`：决定卷在被销毁之前必须存在多长时间，以秒计算。"
+
+# b9e7a6caccba433b8b23ff40bfd62123
+#: ../../storage.rst:711
+msgid ""
+"`expunge.interval`: determines how often to run the garbage collection check"
+msgstr "`expunge.interval`：决定垃圾回收检查的运行频率。"
+
+# 984f73d3efdc4d5a8a6a4309599cb386
+#: ../../storage.rst:714
+msgid ""
+"Administrators should adjust these values depending on site policies around "
+"data retention."
+msgstr "管理员可以根据站点的数据保留策略来调整这些值。"
+
+# 08d9ad354b7942bda197e39303ab4ebe
+#: ../../storage.rst:718
+msgid "Working with Volume Snapshots"
+msgstr "使用卷快照"
+
+# 9eb74cf1e4cd42df8c38300249e3e96b
+# a741a75990ca4897a4f34de220bde407
+#: ../../storage.rst:720 ../../storage.rst:773
+msgid ""
+"(Supported for the following hypervisors: **XenServer**, **VMware vSphere**,"
+" and **KVM**)"
+msgstr "(支持以下hypervisor：**XenServer**、**VMware vSphere** 和 **KVM**)"
+
+# 673b7caff5fb4ce481e087a9877d4dcd
+#: ../../storage.rst:723
+msgid ""
+"CloudStack supports snapshots of disk volumes. Snapshots are a point-in-time"
+" capture of virtual machine disks. Memory and CPU states are not captured. "
+"If you are using the Oracle VM hypervisor, you can not take snapshots, since"
+" OVM does not support them."
+msgstr "CloudStack支持磁盘卷的快照。快照是虚拟机磁盘在某一时间点的捕捉。内存和CPU状态不会被捕捉。如果你使用Oracle VM hypervisor，那么你不能做快照，因为OVM不支持。"
+
+# d6369c41ebb54df68c691818f61104bb
+#: ../../storage.rst:728
+msgid ""
+"Snapshots may be taken for volumes, including both root and data disks "
+"(except when the Oracle VM hypervisor is used, which does not support "
+"snapshots). The administrator places a limit on the number of stored "
+"snapshots per user. Users can create new volumes from the snapshot for "
+"recovery of particular files and they can create templates from snapshots to"
+" boot from a restored disk."
+msgstr "卷，包括root和数据磁盘(使用Oracle VM hypervisor时除外，因为OVM不支持快照)，都可以做快照。管理员可以限制每个用户存储的快照数量。用户可以从快照创建新卷，用来恢复特定的文件；还可以从快照创建模板，以便从恢复的磁盘启动。"
+
+# 90f5f9401836415abbc9e1620b3f224f
+#: ../../storage.rst:735
+msgid ""
+"Users can create snapshots manually or by setting up automatic recurring "
+"snapshot policies. Users can also create disk volumes from snapshots, which "
+"may be attached to a VM like any other disk volume. Snapshots of both root "
+"disks and data disks are supported. However, CloudStack does not currently "
+"support booting a VM from a recovered root disk. A disk recovered from "
+"snapshot of a root disk is treated as a regular data disk; the data on "
+"recovered disk can be accessed by attaching the disk to a VM."
+msgstr "用户可以手动创建快照，或者通过设置自动循环快照策略来创建。用户也可以从快照创建磁盘卷，并像其他磁盘卷一样附加到VM上。root磁盘和数据磁盘的快照都被支持。但是，CloudStack目前不支持从恢复的root磁盘启动VM。从root磁盘快照恢复的磁盘被视为普通的数据磁盘；恢复的磁盘上的数据可以通过将该磁盘附加到VM来访问。"
+
+# 75d1388574754e729e8da27b3c799a49
+#: ../../storage.rst:744
+msgid ""
+"A completed snapshot is copied from primary storage to secondary storage, "
+"where it is stored until deleted or purged by newer snapshot."
+msgstr "完成的快照会从主存储复制到辅助存储，并一直保存在那里，直到被删除或被更新的快照清除。"
+
+# bada0f726d0d4d729aea533943c409b3
+#: ../../storage.rst:748
+msgid "How to Snapshot a Volume"
+msgstr "如何给卷做快照"
+
+# 44e0d512d57c4888a44a396a5eac75bb
+#: ../../storage.rst:752
+msgid "Log in to the CloudStack UI as a user or administrator."
+msgstr "以用户或管理员身份登录CloudStack用户界面。"
+
+# 19c4af0199724420aecc49d58b0f818e
+#: ../../storage.rst:760
+msgid "In Select View, be sure Volumes is selected."
+msgstr "在选择视图中，确认选择的是卷。"
+
+# 70d14f08edf540afaa590235f7ea2f8b
+#: ../../storage.rst:764
+msgid "Click the name of the volume you want to snapshot."
+msgstr "点击你要做快照的卷的名称。"
+
+# aed06a6f99fd4092a29f2307fa3f046b
+#: ../../storage.rst:768
+msgid "Click the Snapshot button. |SnapshotButton.png|"
+msgstr "点击快照按钮。 |SnapshotButton.png|"
+
+# 669a8d0c02184e369299bf9d7e5483d2
+#: ../../storage.rst:771
+msgid "Automatic Snapshot Creation and Retention"
+msgstr "自动创建和保留快照"
+
+# 24d0d72217234a93a95c9d1990899367
+#: ../../storage.rst:776
+msgid ""
+"Users can set up a recurring snapshot policy to automatically create "
+"multiple snapshots of a disk at regular intervals. Snapshots can be created "
+"on an hourly, daily, weekly, or monthly interval. One snapshot policy can be"
+" set up per disk volume. For example, a user can set up a daily snapshot at "
+"02:30."
+msgstr "用户可以设置循环快照策略，定期自动为磁盘创建多个快照。快照可以按小时、天、周或月的间隔创建。每个磁盘卷可以设置一个快照策略。比如，用户可以设置每天02:30做快照。"
+
+# 1ddc07ffd4744db3996a56928fabe1b3
+#: ../../storage.rst:782
+msgid ""
+"With each snapshot schedule, users can also specify the number of scheduled "
+"snapshots to be retained. Older snapshots that exceed the retention limit "
+"are automatically deleted. This user-defined limit must be equal to or lower"
+" than the global limit set by the CloudStack administrator. See `“Globally "
+"Configured Limits” <usage.html#globally-configured-limits>`_. The limit "
+"applies only to those snapshots that are taken as part of an automatic "
+"recurring snapshot policy. Additional manual snapshots can be created and "
+"retained."
+msgstr "对于每个快照计划，用户还可以指定要保留的计划快照数量。超出保留限制的旧快照会被自动删除。这个用户定义的限制必须等于或小于CloudStack管理员设置的全局限制。请参阅 `“全局配置的限制” <usage.html#globally-configured-limits>`_。该限制仅适用于作为自动循环快照策略一部分创建的快照。额外的手动快照可以被创建和保留。"
+
+# cf33bcc958214dce8809b1b3664d8473
+#: ../../storage.rst:793
+msgid "Incremental Snapshots and Backup"
+msgstr "增量快照和备份"
+
+# 790dbaf719ea4af3a21944492c933590
+#: ../../storage.rst:795
+msgid ""
+"Snapshots are created on primary storage where a disk resides. After a "
+"snapshot is created, it is immediately backed up to secondary storage and "
+"removed from primary storage for optimal utilization of space on primary "
+"storage."
+msgstr "快照创建在磁盘所在的主存储上。快照创建之后，它会立即被备份到辅助存储，并从主存储上删除，以优化主存储的空间利用。"
+
+# c6a3543640914d2abcb184e9f17b9a33
+#: ../../storage.rst:800
+msgid ""
+"CloudStack does incremental backups for some hypervisors. When incremental "
+"backups are supported, every N backup is a full backup."
+msgstr "CloudStack对某些hypervisor做增量备份。当支持增量备份时，每第N个备份是一次完全备份。"
+
+# ca4450266a9e41e4ba1d71743809cc71
+#: ../../storage.rst:807
+msgid "Support incremental backup"
+msgstr "支持增量备份"
+
+# 05da4101c8ee4a32a1cd8e61dd902cca
+#: ../../storage.rst:812
+msgid "Volume Status"
+msgstr "卷状态"
+
+# 389f5943ba4c4e04be7c271c11632520
+#: ../../storage.rst:814
+msgid ""
+"When a snapshot operation is triggered by means of a recurring snapshot "
+"policy, a snapshot is skipped if a volume has remained inactive since its "
+"last snapshot was taken. A volume is considered to be inactive if it is "
+"either detached or attached to a VM that is not running. CloudStack ensures "
+"that at least one snapshot is taken since the volume last became inactive."
+msgstr "当快照操作是由循环快照策略触发时，如果自上次快照以来卷一直处于非活跃状态，则这次快照会被跳过。如果卷被卸载，或者附加到了没有运行的VM上，那么它就被认为是非活跃的。CloudStack会确保自卷上一次变为非活跃状态以来，至少创建了一个快照。"
+
+# f6a7564bbd474bb28c93ac2ef46e537f
+#: ../../storage.rst:821
+msgid ""
+"When a snapshot is taken manually, a snapshot is always created regardless "
+"of whether a volume has been active or not."
+msgstr "当手动创建快照时，不管卷是不是活跃的，快照总会被创建。"
+
+# e1558eacbe2f4d768b7fe6e226152c57
+#: ../../storage.rst:825
+msgid "Snapshot Restore"
+msgstr "快照恢复"
+
+# eb2f5defecae49b9a0b4338cc0a8ff65
+#: ../../storage.rst:827
+msgid ""
+"There are two paths to restoring snapshots. Users can create a volume from "
+"the snapshot. The volume can then be mounted to a VM and files recovered as "
+"needed. Alternatively, a template may be created from the snapshot of a root"
+" disk. The user can then boot a VM from this template to effect recovery of "
+"the root disk."
+msgstr "有两种方式恢复快照。用户可以从快照创建一个卷，然后将该卷挂载到VM上，按需恢复文件。另一种方式是，从root磁盘的快照创建模板，用户可以从这个模板启动VM，从而实现root磁盘的恢复。"
+
+# e9999cd4fd964600a2979344583457d2
+#: ../../storage.rst:834
+msgid "Snapshot Job Throttling"
+msgstr "快照任务限流"
+
+# c31b190ad5a64d5c84c623c66056d37a
+#: ../../storage.rst:836
+msgid ""
+"When a snapshot of a virtual machine is requested, the snapshot job runs on "
+"the same host where the VM is running or, in the case of a stopped VM, the "
+"host where it ran last. If many snapshots are requested for VMs on a single "
+"host, this can lead to problems with too many snapshot jobs overwhelming the"
+" resources of the host."
+msgstr "当请求对虚拟机做快照时，快照任务运行在VM所在的主机上；如果VM已停止，则运行在它最后运行过的主机上。如果一台主机上的VM被请求了很多快照，过多的快照任务可能会压垮主机的资源，从而导致问题。"
+
+# b250747385d042f49ad48f81329b10a7
+#: ../../storage.rst:842
+msgid ""
+"To address this situation, the cloud's root administrator can throttle how "
+"many snapshot jobs are executed simultaneously on the hosts in the cloud by "
+"using the global configuration setting "
+"concurrent.snapshots.threshold.perhost. By using this setting, the "
+"administrator can better ensure that snapshot jobs do not time out and "
+"hypervisor hosts do not experience performance issues due to hosts being "
+"overloaded with too many snapshot requests."
+msgstr "针对这种情况，云的root管理员可以利用全局配置设置concurrent.snapshots.threshold.perhost来限制云中主机上同时执行的快照任务数量。借助这个设置，管理员可以更好地确保快照任务不会超时，并且hypervisor主机不会因为过多的快照请求而出现性能问题。"
+
+# d1dd9aa964bc44e39ace5434f582dcd1
+#: ../../storage.rst:850
+msgid ""
+"Set concurrent.snapshots.threshold.perhost to a value that represents a best"
+" guess about how many snapshot jobs the hypervisor hosts can execute at one "
+"time, given the current resources of the hosts and the number of VMs running"
+" on the hosts. If a given host has more snapshot requests, the additional "
+"requests are placed in a waiting queue. No new snapshot jobs will start "
+"until the number of currently executing snapshot jobs falls below the "
+"configured limit."
+msgstr "将concurrent.snapshots.threshold.perhost设置为一个最佳估计值，即在考虑主机当前资源和主机上运行的VM数量的情况下，hypervisor主机在同一时刻能执行多少快照任务。如果某台主机收到更多的快照请求，额外的请求会被放入等待队列。在当前执行的快照任务数量降到配置的限制以下之前，不会启动新的快照任务。"
+
+# 7b49369aa49743b780984cc9c3bd2c52
+#: ../../storage.rst:858
+msgid ""
+"The admin can also set job.expire.minutes to place a maximum on how long a "
+"snapshot request will wait in the queue. If this limit is reached, the "
+"snapshot request fails and returns an error message."
+msgstr "管理员也可以设置job.expire.minutes来限制快照请求在队列中等待的最长时间。如果达到了这个限制，快照请求就会失败并返回一个错误消息。"
+
+# fb6fcac386824780ac7f09de0d50a555
+#: ../../storage.rst:863
+msgid "VMware Volume Snapshot Performance"
+msgstr "VMware卷快照性能"
+
+# 80cd717eab4a487f89e5d530af50dcdf
+#: ../../storage.rst:865
+msgid ""
+"When you take a snapshot of a data or root volume on VMware, CloudStack uses"
+" an efficient storage technique to improve performance."
+msgstr "当你为VMware上的数据卷或root卷做快照时，CloudStack使用一种高效的存储技术来提高性能。"
+
+# 0feb29ffc87942eaa740795e36db3801
+#: ../../storage.rst:868
+msgid ""
+"A snapshot is not immediately exported from vCenter to a mounted NFS share "
+"and packaged into an OVA file format. This operation would consume time and "
+"resources. Instead, the original file formats (e.g., VMDK) provided by "
+"vCenter are retained. An OVA file will only be created as needed, on demand."
+" To generate the OVA, CloudStack uses information in a properties file "
+"(\\*.ova.meta) which it stored along with the original snapshot data."
+msgstr "快照不会立即从vCenter导出到挂载的NFS共享并打包成OVA文件格式。这个操作会消耗时间和资源。相反，vCenter提供的原始文件格式(例如VMDK)会被保留。OVA文件只在需要时按需创建。为了生成OVA，CloudStack使用与原始快照数据保存在一起的属性文件(\\*.ova.meta)中的信息。"
+
+# de8e361b5b984d8fac0ef05d6f5e03e2
+#: ../../storage.rst:877
+msgid ""
+"For upgrading customers: This process applies only to newly created "
+"snapshots after upgrade to CloudStack 4.2. Snapshots that have already been "
+"taken and stored in OVA format will continue to exist in that format, and "
+"will continue to work as expected."
+msgstr "对于升级的客户：这个过程只适用于升级到CloudStack 4.2之后新创建的快照。已经做过并以OVA格式存储的快照将继续以该格式存在，并且能继续正常工作。"