Dear Hieu,
thanks for your reply. I attach the config file, my moses.ini (I think
this is the one you wanted), and a few lines of our input file, already
preprocessed. If you want the raw lines, I can also send them to you.
I don't know if this is a related issue, but I tried the same strategy
using forced translations (<np translation="German">Deutsch</np>), and
this morning I observed the same behaviour: some tags are suddenly
appearing in the translation.
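(For clarity: with -xml-input exclusive, the decoder is forced to
output the supplied translation attribute, i.e. "German", in place of
the wrapped source token "Deutsch".)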
Thank you very much for your support!
Carla
On 19.05.2015 09:13, Hieu Hoang wrote:
What is the exact command you used to decode? Can you please provide
the moses.ini file and a few lines of your input data for us to look
at?
Hieu Hoang
Researcher
New York University, Abu Dhabi
http://www.hoang.co.uk/hieu
On 18 May 2015 at 15:35, Carla Parra <[email protected]>
wrote:
Dear all,
we just finished some experiments using placeables, and we have
observed several issues that may be worth sharing. I don't know if
someone has experienced the same, or whether you were already aware of
this, but just in case:
(1) Special characters must be escaped in the "entity" value field.
Otherwise, they cause XML parsing errors at tuning (not at training,
though!), and wrong values are retrieved from the tags (e.g. we had
text with additional quotation marks, and this caused the translation
to stop at the first quotation mark, so the complete "entity" value we
had encoded was never produced).
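For reference, the escaping the "entity" field needs can be produced
e.g. as follows (a minimal Python sketch; the make_ne helper is just an
illustration, not a Moses script):

    from xml.sax.saxutils import escape

    # Wrap a placeholder in an <ne> tag, escaping &, <, > and double
    # quotes inside the entity attribute, so that the XML parser at
    # tuning time sees a well-formed tag.
    def make_ne(entity_value, placeholder="@tag@"):
        safe = escape(entity_value, {'"': "&quot;"})
        return '<ne translation="{0}" entity="{1}">{0}</ne>'.format(placeholder, safe)

    print(make_ne('<g id="50">'))
    # <ne translation="@tag@" entity="&lt;g id=&quot;50&quot;&gt;">@tag@</ne>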
(2) <ne> tags are added to sentences as if they had been learned as
tokens during training (i.e. they are not ignored, even though they
only contain the placeables).
As an example, the English sentence "Allow simple password" is
translated as "Permitir simple contraseña <ne translation="@tag@"
entity="</1>">@tag@</ne> ."
While the first issue is our fault, we do not know what causes the
second one. We have followed the instructions on the Moses advanced
features page and thus specified extract-settings = "--Placeholders
@tag@" in training and "-placeholder-factor 1 -xml-input exclusive" in
the decoder settings for tuning and evaluation. Has anyone experienced
the same thing, and/or does anyone know how to solve this issue?
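In case it helps to reproduce this: the placeholder idea on our side
boils down to masking the markup before decoding and unmasking it
afterwards, roughly like this (a simplified Python sketch of the
general scheme, not our exact scripts and not Moses code):

    import re

    TAG = "@tag@"
    MARKUP = re.compile(r"</?\w+[^>]*>")

    def mask(sentence):
        """Replace inline markup (e.g. </1>, <g id="50">) with @tag@
        and remember the original entities, left to right."""
        entities = MARKUP.findall(sentence)
        return MARKUP.sub(TAG, sentence), entities

    def unmask(translation, entities):
        """Restore the remembered entities, one per @tag@ token."""
        for ent in entities:
            translation = translation.replace(TAG, ent, 1)
        return translation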
Thank you very much. Best regards,
Carla
--
Carla Parra Escartín
Marie Curie Experienced Researcher - EXPERT ITN
http://expert-itn.eu/
Hermes Traducciones
--
Carla Parra Escartín
Marie Curie Experienced Researcher - EXPERT ITN
http://expert-itn.eu/
Hermes Traducciones
################################################
### CONFIGURATION FILE FOR AN SMT EXPERIMENT ###
################################################
[GENERAL]
### directory in which experiment is run
#
working-dir = /home/hermesta/Exps/KES_newDev_placeholders3
# specification of the language pair
input-extension = en
output-extension = es
pair-extension = en-es
### directories that contain tools and data
#
# moses
moses-src-dir = /home/hermesta/mosesdecoder
#
# moses binaries
moses-bin-dir = $moses-src-dir/bin
#
# moses scripts
moses-script-dir = $moses-src-dir/scripts
#
# directory where GIZA++/MGIZA programs reside
external-bin-dir = /home/hermesta/tools/fast_align
#
# srilm
#srilm-dir = $moses-src-dir/srilm/bin/i686
#
# irstlm
#irstlm-dir = $moses-src-dir/irstlm/bin
#
# randlm
#randlm-dir = $moses-src-dir/randlm/bin
#
# data
data = $working-dir/data
### basic tools
#
# moses decoder
decoder = $moses-bin-dir/moses
# conversion of rule table into binary on-disk format
ttable-binarizer = "$moses-bin-dir/CreateOnDiskPt 1 1 4 100 2"
# tokenizers - comment out if all your data is already tokenized
input-tokenizer = "$moses-script-dir/tokenizer/tokenizer.perl -protected $moses-script-dir/tokenizer/patterns -no-escape -l $input-extension"
output-tokenizer = "$moses-script-dir/tokenizer/tokenizer.perl -protected $moses-script-dir/tokenizer/patterns -no-escape -l $output-extension"
# truecasers - comment out if you do not use the truecaser
#input-truecaser = $moses-script-dir/recaser/truecase.perl
#output-truecaser = $moses-script-dir/recaser/truecase.perl
detruecaser = $moses-script-dir/recaser/detruecase.perl
# lowercaser - comment out if you use truecasing
#input-lowercaser = $moses-script-dir/tokenizer/lowercase.perl
#output-lowercaser = $moses-script-dir/tokenizer/lowercase.perl
### generic parallelizer for cluster and multi-core machines
# you may specify a script that allows the parallel execution of
# parallelizable steps (see meta file). you also need to specify
# the number of jobs (cluster) or cores (multicore)
#
#generic-parallelizer = $moses-script-dir/ems/support/generic-parallelizer.perl
generic-parallelizer = $moses-script-dir/ems/support/generic-multicore-parallelizer.perl
### cluster settings (if run on a cluster machine)
# number of jobs to be submitted in parallel
#
#jobs = 10
# arguments to qsub when scheduling a job
#qsub-settings = ""
# project for privileges and usage accounting
#qsub-project = iccs_smt
# memory and time
#qsub-memory = 4
#qsub-hours = 48
### multi-core settings
# when the generic parallelizer is used, the number of cores
# is specified here
cores = 8
#################################################################
# PARALLEL CORPUS PREPARATION:
# create a tokenized, sentence-aligned corpus, ready for training
[CORPUS]
### long sentences are filtered out, since they slow down GIZA++
# and are a less reliable source of data. set here the maximum
# length of a sentence
#
max-sentence-length = 80
[CORPUS:KES]
### command to run to get raw corpus files
#
# get-corpus-script =
### raw corpus files (untokenized, but sentence aligned)
#
raw-stem = $data/training/KES10.train.preproc
### tokenized corpus files (may contain long sentences)
#
#tokenized-stem = $data/training/KES10.train.preproc.tok
### if sentence filtering should be skipped,
# point to the clean training data
#
#clean-stem =
### if corpus preparation should be skipped,
# point to the prepared training data
#
#lowercased-stem =
#################################################################
# LANGUAGE MODEL TRAINING
[LM]
### tool to be used for language model training
# kenlm training
lm-training = "$moses-script-dir/ems/support/lmplz-wrapper.perl -bin $moses-bin-dir/lmplz"
settings = "--prune '0 0 1' -T $working-dir/lm -S 20%"
# srilm
#lm-training = $srilm-dir/ngram-count
#settings = "-interpolate -kndiscount -unk"
# irstlm training
# msb = modified kneser ney; p=0 no singleton pruning
#lm-training = "$moses-script-dir/generic/trainlm-irst2.perl -cores $cores -irst-dir $irstlm-dir -temp-dir $working-dir/tmp"
#settings = "-s msb -p 0"
# order of the language model
order = 5
### tool to be used for training randomized language model from scratch
# (more commonly, a SRILM is trained)
#
#rlm-training = "$randlm-dir/buildlm -falsepos 8 -values 8"
### script to use for binary table format for irstlm or kenlm
# (default: no binarization)
# irstlm
#lm-binarizer = $irstlm-dir/compile-lm
# kenlm, also set type to 8
lm-binarizer = $moses-bin-dir/build_binary
type = 8
### script to create quantized language model format (irstlm)
# (default: no quantization)
#
#lm-quantizer = $irstlm-dir/quantize-lm
### script to use for converting into randomized table format
# (default: no randomization)
#
#lm-randomizer = "$randlm-dir/buildlm -falsepos 8 -values 8"
### each language model to be used has its own section here
[LM:KES]
### command to run to get raw corpus files
#
#get-corpus-script = ""
### raw corpus (untokenized)
#
raw-corpus = $data/training/KES10.train.preproc.$output-extension
### tokenized corpus files (may contain long sentences)
#
#tokenized-corpus = $data/training/KES10.train.preproc.tok.$output-extension
### if corpus preparation should be skipped,
# point to the prepared language model
#
#lm =
#################################################################
# INTERPOLATING LANGUAGE MODELS
[INTERPOLATED-LM] IGNORE
# if multiple language models are used, these may be combined
# by optimizing perplexity on a tuning set
# see, for instance [Koehn and Schwenk, IJCNLP 2008]
### script to interpolate language models
# if commented out, no interpolation is performed
#
# script = $moses-script-dir/ems/support/interpolate-lm.perl
### tuning set
# you may use the same set that is used for mert tuning (reference set)
#
#tuning-sgm =
#raw-tuning =
#tokenized-tuning =
#factored-tuning =
#lowercased-tuning =
#split-tuning =
### group language models for hierarchical interpolation
# (flat interpolation is limited to 10 language models)
#group = "first,second fourth,fifth"
### script to use for binary table format for irstlm or kenlm
# (default: no binarization)
# irstlm
#lm-binarizer = $irstlm-dir/compile-lm
# kenlm, also set type to 8
lm-binarizer = $moses-bin-dir/build_binary
type = 8
### script to create quantized language model format (irstlm)
# (default: no quantization)
#
#lm-quantizer = $irstlm-dir/quantize-lm
### script to use for converting into randomized table format
# (default: no randomization)
#
#lm-randomizer = "$randlm-dir/buildlm -falsepos 8 -values 8"
#################################################################
# MODIFIED MOORE LEWIS FILTERING
[MML] IGNORE
### specifications for language models to be trained
#
#lm-training = $srilm-dir/ngram-count
#lm-settings = "-interpolate -kndiscount -unk"
#lm-binarizer = $moses-src-dir/bin/build_binary
#lm-query = $moses-src-dir/bin/query
#order = 5
### in-/out-of-domain source/target corpora to train the 4 language models
#
# in-domain: point either to a parallel corpus
#indomain-stem = [CORPUS:toy:clean-split-stem]
# ... or to two separate monolingual corpora
#indomain-target = [LM:toy:lowercased-corpus]
#raw-indomain-source = $toy-data/nc-5k.$input-extension
# point to out-of-domain parallel corpus
#outdomain-stem = [CORPUS:giga:clean-split-stem]
# settings: number of lines sampled from the corpora to train each language model on
# (if used at all, should be small as a percentage of corpus)
#settings = "--line-count 100000"
#################################################################
# TRANSLATION MODEL TRAINING
[TRAINING]
### training script to be used: either a legacy script or
# current moses training script (default)
#
script = $moses-script-dir/training/train-model.perl
extract-settings = "--Placeholders @tag@"
### general options
# these are options that are passed on to train-model.perl, for instance
# * "-mgiza -mgiza-cpus 8" to use mgiza instead of giza
# * "-sort-buffer-size 8G -sort-compress gzip" to reduce on-disk sorting
# * "-sort-parallel 8 -cores 8" to speed up phrase table building
# * "-parallel" for parallel execution of mkcls and giza
#
#training-options = "-mgiza -mgiza-cpus 8"
### factored training: specify here which factors are used
# if none specified, single factor training is assumed
# (one translation step, surface to surface)
#
#input-factors = word lemma pos morph
#output-factors = word lemma pos
#alignment-factors = "word -> word"
#translation-factors = "word -> word"
#reordering-factors = "word -> word"
#generation-factors = "word -> pos"
#decoding-steps = "t0, g0"
### parallelization of data preparation step
# the two directions of the data preparation can be run in parallel
# comment out if not needed
#
parallel = yes
### pre-computation for giza++
# giza++ has a more efficient data structure that needs to be
# initialized with snt2cooc. if run in parallel, this may reduce
# memory requirements. set here the number of parts
#
#run-giza-in-parts = 5
### symmetrization method to obtain word alignments from giza output
# (commonly used: grow-diag-final-and)
#
alignment-symmetrization-method = grow-diag-final-and
### use of Chris Dyer's fast align for word alignment
#
fast-align-settings = "-d -o -v"
### use of berkeley aligner for word alignment
#
#use-berkeley = true
#alignment-symmetrization-method = berkeley
#berkeley-train = $moses-script-dir/ems/support/berkeley-train.sh
#berkeley-process = $moses-script-dir/ems/support/berkeley-process.sh
#berkeley-jar = /your/path/to/berkeleyaligner-1.1/berkeleyaligner.jar
#berkeley-java-options = "-server -mx30000m -ea"
#berkeley-training-options = "-Main.iters 5 5 -EMWordAligner.numThreads 8"
#berkeley-process-options = "-EMWordAligner.numThreads 8"
#berkeley-posterior = 0.5
### use of baseline alignment model (incremental training)
#
#baseline = 68
#baseline-alignment-model = "$working-dir/training/prepared.$baseline/$input-extension.vcb \
# $working-dir/training/prepared.$baseline/$output-extension.vcb \
# $working-dir/training/giza.$baseline/${output-extension}-$input-extension.cooc \
# $working-dir/training/giza-inverse.$baseline/${input-extension}-$output-extension.cooc \
# $working-dir/training/giza.$baseline/${output-extension}-$input-extension.thmm.5 \
# $working-dir/training/giza.$baseline/${output-extension}-$input-extension.hhmm.5 \
# $working-dir/training/giza-inverse.$baseline/${input-extension}-$output-extension.thmm.5 \
# $working-dir/training/giza-inverse.$baseline/${input-extension}-$output-extension.hhmm.5"
### if word alignment should be skipped,
# point to word alignment files
#
#word-alignment = $working-dir/model/aligned.1
### filtering some corpora with modified Moore-Lewis
# specify corpora to be filtered and ratio to be kept, either before or after word alignment
#mml-filter-corpora = toy
#mml-before-wa = "-proportion 0.9"
#mml-after-wa = "-proportion 0.9"
### build memory mapped suffix array phrase table
# (binarizing the reordering table is a good idea, since filtering makes little sense)
#mmsapt = "num-features=9 pfwd=g+ pbwd=g+ smooth=0 sample=1000 workers=1"
#binarize-all = $moses-script-dir/training/binarize-model.perl
### create a bilingual concordancer for the model
#
#biconcor = $moses-bin-dir/biconcor
## Operation Sequence Model (OSM)
# Durrani, Schmid and Fraser. (2011):
# "A Joint Sequence Translation Model with Integrated Reordering"
# compile Moses with --max-kenlm-order=9 if higher order is required
#
#operation-sequence-model = "yes"
#operation-sequence-model-order = 5
#operation-sequence-model-settings = ""
#
# if OSM training should be skipped, point to OSM Model
#osm-model =
### unsupervised transliteration module
# Durrani, Sajjad, Hoang and Koehn (EACL, 2014).
# "Integrating an Unsupervised Transliteration Model
# into Statistical Machine Translation."
#
#transliteration-module = "yes"
#post-decoding-transliteration = "yes"
### lexicalized reordering: specify orientation type
# (default: only distance-based reordering model)
#
lexicalized-reordering = msd-bidirectional-fe
### hierarchical rule set
#
#hierarchical-rule-set = true
### settings for rule extraction
#
#extract-settings = ""
max-phrase-length = 5
### add extracted phrases from baseline model
#
#baseline-extract = $working-dir/model/extract.$baseline
#
# requires aligned parallel corpus for re-estimating lexical translation probabilities
#baseline-corpus = $working-dir/training/corpus.$baseline
#baseline-alignment = $working-dir/model/aligned.$baseline.$alignment-symmetrization-method
### unknown word labels (target syntax only)
# enables use of unknown word labels during decoding
# label file is generated during rule extraction
#
#use-unknown-word-labels = true
### if phrase extraction should be skipped,
# point to stem for extract files
#
# extracted-phrases =
### settings for rule scoring
#
score-settings = "--GoodTuring --MinScore 2:0.0001"
### include word alignment in phrase table
#
#include-word-alignment-in-rules = yes
### sparse lexical features
#
#sparse-features = "target-word-insertion top 50, source-word-deletion top 50, word-translation top 50 50, phrase-length"
### domain adaptation settings
# options: sparse, any of: indicator, subset, ratio
#domain-features = "subset"
### if phrase table training should be skipped,
# point to phrase translation table
#
# phrase-translation-table =
### if reordering table training should be skipped,
# point to reordering table
#
# reordering-table =
### filtering the phrase table based on significance tests
# Johnson, Martin, Foster and Kuhn. (2007): "Improving Translation Quality by Discarding Most of the Phrasetable"
# options: -n number of translations; -l 'a+e', 'a-e', or a positive real value -log prob threshold
#salm-index = /path/to/project/salm/Bin/Linux/Index/IndexSA.O64
#sigtest-filter = "-l a+e -n 50"
### if training should be skipped,
# point to a configuration file that contains
# pointers to all relevant model files
#
#config-with-reused-weights =
#####################################################
### TUNING: finding good weights for model components
[TUNING]
### instead of tuning with this setting, old weights may be recycled
# specify here an old configuration file with matching weights
#
#weight-config = $data/weight.ini
### tuning script to be used
#
tuning-script = $moses-script-dir/training/mert-moses.pl
tuning-settings = "-mertdir $moses-bin-dir"
### specify the corpus used for tuning
# it should contain 1000s of sentences
#
#input-sgm =
raw-input = $data/dev/KES10.dev.preproc.en
#tokenized-input = $data/dev/KES10.dev.preproc.tok.en
#factorized-input =
#input =
#
#reference-sgm =
raw-reference = $data/dev/KES10.dev.preproc.es
#tokenized-reference = $data/dev/KES10.dev.preproc.tok.es
#factorized-reference =
#reference =
### size of n-best list used (typically 100)
#
nbest = 100
### ranges for weights for random initialization
# if not specified, the tuning script will use generic ranges
# it is not clear if this matters
#
# lambda =
### additional flags for the filter script
#
filter-settings = ""
### additional flags for the decoder
#
decoder-settings = "-placeholder-factor 1 -xml-input exclusive"
### if tuning should be skipped, specify this here
# and also point to a configuration file that contains
# pointers to all relevant model files
#
#config-with-reused-weights =
#########################################################
## RECASER: restore case, this part only trains the model
[RECASING] IGNORE
### training data
# raw input still needs to be tokenized;
# alternatively, tokenized input may be specified
#
#tokenized = [LM:europarl:tokenized-corpus]
### additional settings
#
recasing-settings = ""
#lm-training = $srilm-dir/ngram-count
decoder = $moses-bin-dir/moses
# already a trained recaser? point to config file
#recase-config =
#######################################################
## TRUECASER: train model to truecase corpora and input
[TRUECASER]
### script to train truecaser models
#
trainer = $moses-script-dir/recaser/train-truecaser.perl
### training data
# data on which truecaser is trained
# if no training data is specified, parallel corpus is used
#
# raw-stem =
# tokenized-stem =
### trained model
#
# truecase-model =
######################################################################
## EVALUATION: translating a test set using the tuned system and scoring it
[EVALUATION]
### additional flags for the filter script
#
#filter-settings = ""
### additional decoder settings
# switches for the Moses decoder
# common choices:
# "-threads N" for multi-threading
# "-mbr" for MBR decoding
# "-drop-unknown" for dropping unknown source words
# "-search-algorithm 1 -cube-pruning-pop-limit 5000 -s 5000" for cube pruning
#
decoder-settings = "-search-algorithm 1 -cube-pruning-pop-limit 5000 -s 5000 -placeholder-factor 1 1 -xml-input exclusive"
### specify size of n-best list, if produced
#
#nbest = 100
### multiple reference translations
#
#multiref = yes
### prepare system output for scoring
# this may include detokenization and wrapping output in sgm
# (needed for nist-bleu, ter, meteor)
#
detokenizer = "$moses-script-dir/tokenizer/detokenizer.perl -l $output-extension"
#recaser = $moses-script-dir/recaser/recase.perl
#wrapping-script = "$moses-script-dir/ems/support/wrap-xml.perl $output-extension"
#output-sgm =
### BLEU
#
#nist-bleu = $moses-script-dir/generic/mteval-v13a.pl
#nist-bleu-c = "$moses-script-dir/generic/mteval-v13a.pl -c"
multi-bleu = "$moses-script-dir/generic/multi-bleu.perl -lc"
#multi-bleu-c = $moses-script-dir/generic/multi-bleu.perl
#ibm-bleu =
### TER: translation error rate (BBN metric) based on edit distance
# not yet integrated
#
# ter =
### METEOR: gives credit to stem / WordNet synonym matches
# not yet integrated
#
# meteor =
### Analysis: carry out various forms of analysis on the output
#
analysis = $moses-script-dir/ems/support/analysis.perl
#
# also report on input coverage
analyze-coverage = yes
#
# also report on phrase mappings used
report-segmentation = yes
#
# report precision of translations for each input word, broken down by
# count of input word in corpus and model
#report-precision-by-coverage = yes
#
# further precision breakdown by factor
#precision-by-coverage-factor = pos
#
# visualization of the search graph in tree-based models
#analyze-search-graph = yes
[EVALUATION:test_RUBIO]
### input data
#
#input-sgm = $data/test-src.$input-extension.sgm
raw-input = $data/test/Rubio.preproc.$input-extension
#tokenized-input = $data/test/Rubio.preproc.tok.$input-extension
# factorized-input =
# input =
### reference data
#
#reference-sgm = $data/test/test-ref.$output-extension.sgm
raw-reference = $data/test/Rubio.$output-extension
#tokenized-reference = $data/test/Rubio.preproc.tok.$output-extension
# reference =
[EVALUATION:test_TERE]
### input data
#
#input-sgm = $data/test-src.$input-extension.sgm
raw-input = $data/test/Tere.preproc.$input-extension
#tokenized-input = $data/test/Tere.preproc.tok.$input-extension
# factorized-input =
# input =
### reference data
#
#reference-sgm = $data/test/test-ref.$output-extension.sgm
raw-reference = $data/test/Tere.$output-extension
#tokenized-reference = $data/test/Tere.preproc.tok.$output-extension
# reference =
### analysis settings
# may contain any of the general evaluation analysis settings
# specific setting: base coverage statistics on earlier run
#
#precision-by-coverage-base = $working-dir/evaluation/test.analysis.5
### wrapping frame
# for nist-bleu and other scoring scripts, the output needs to be wrapped
# in sgm markup (typically like the input sgm)
#
#wrapping-frame = $input-sgm
##########################################
### REPORTING: summarize evaluation scores
[REPORTING]
### currently no parameters for reporting section
# MERT optimized configuration
# decoder /home/hermesta/mosesdecoder/bin/moses
# BLEU 0.625658 on dev /home/hermesta/Exps/KES_newDev_placeholders3/tuning/input.tok.1
# We were before running iteration 4
# finished Mon May 18 15:43:37 CEST 2015
### MOSES CONFIG FILE ###
#########################
# input factors
[input-factors]
0
# mapping steps
[mapping]
0 T 0
[distortion-limit]
6
# feature functions
[feature]
UnknownWordPenalty
WordPenalty
PhrasePenalty
PhraseDictionaryOnDisk name=TranslationModel0 num-features=4 path=/home/hermesta/Exps/KES_newDev_placeholders3/tuning/filtered.1/phrase-table.0-0.1.1.bin input-factor=0 output-factor=0
LexicalReordering name=LexicalReordering0 num-features=6 type=wbe-msd-bidirectional-fe-allff input-factor=0 output-factor=0 path=/home/hermesta/Exps/KES_newDev_placeholders3/tuning/filtered.1/reordering-table.1.wbe-msd-bidirectional-fe
Distortion
KENLM lazyken=0 name=LM0 factor=0 path=/home/hermesta/Exps/KES_newDev_placeholders3/lm/KES.binlm.1 order=5
# dense weights for feature functions
[xml-input]
exclusive
[placeholder-factor]
1
[v]
0
[weight]
LexicalReordering0= 0.0232753 0.0800222 0.114982 0.1123 -0.00290165 0.145305
Distortion0= 0.00414399
LM0= 0.0866751
WordPenalty0= -0.221848
PhrasePenalty0= -0.0347004
TranslationModel0= 0.0424729 0.0339108 0.080352 0.0171106
UnknownWordPenalty0= 1
<ne translation="@tag@" entity="<x id="27" xid="b19ee0f0-6852-4363-b17b-119f87fa18c8"/>">@tag@</ne> Print
The " lock " attribute <ne translation="@tag@" entity="<g id="50" xid="d379be9c-d5f5-4302-9b2f-66c982c7990b">">@tag@</ne> shows whether or not the settings can be edited in the local application settings via the Administration Console . <ne translation="@tag@" entity="</g>">@tag@</ne>
The <ne translation="@tag@" entity="<g id="56">">@tag@</ne> Password settings <ne translation="@tag@" entity="</g>">@tag@</ne> section lets you configure password strength requirements and the rules for entering the password for protecting the mobile device .
Apply settings on device
Applying the password strength requirements defined in the <ne translation="@tag@" entity="<g id="66">">@tag@</ne> Password <ne translation="@tag@" entity="</g>">@tag@</ne> section on the mobile device .
If the password does not meet the requirements , the user has to change it in accordance with the settings defined by the administrator .
If this check box is selected , the management device checks the password for compliance with the strength requirements defined in the policy after the device is synchronized with Administration Server .
If this check box is cleared , the management device does not check the password for strength after synchronization .
Allow simple password
Use of a simple password for protecting a mobile device .
A <ne translation="@tag@" entity="<g id="84">">@tag@</ne> simple password <ne translation="@tag@" entity="</g>">@tag@</ne> is a password that contains successive or repetitive characters , such as " abcd " or " <ne translation="@tag@" entity="2222">@tag@</ne> " .
If the check box is selected , the user can use a simple password to protect the mobile device .
If the check box is cleared , the user cannot use a simple password to protect the mobile device .
Prompt for alphanumeric value
If the check box is selected , the user is required to use an alphanumeric password to protect the mobile device .
If the check box is cleared , the user is not required to use an alphanumeric password to protect the mobile device .
Specify the minimum password length in characters .
Minimum number of special characters
Selecting the minimum number of special symbols ( such as " $ " , " & " , " ! " ) that can be included in the password for protection of the iOS MDM mobile device .
Maximum password lifetime
A period of time in days during which the password remains valid .
The default value is <ne translation="@tag@" entity="0">@tag@</ne> .
If the mobile device remains idle during this time , it switches to sleep mode .
On different mobile devices , the actual time of the device ' s automatic locking may differ from the value that you have specified :
On iPhone devices : if you have set Auto-Lock in <ne translation="@tag@" entity="10">@tag@</ne> or <ne translation="@tag@" entity="15">@tag@</ne> minutes , the device will be locked in <ne translation="@tag@" entity="5">@tag@</ne> minutes .
On iPad devices : if you have set Auto-Lock in <ne translation="@tag@" entity="1">@tag@</ne> – <ne translation="@tag@" entity="4">@tag@</ne> minutes , the device will be locked in <ne translation="@tag@" entity="2">@tag@</ne> minutes .
For other values the actual time of the device ' s automatic locking matches the specified time .
Password history
For example , if the value is set to <ne translation="@tag@" entity="3">@tag@</ne> , the new password cannot match any one of the last three passwords used .
If passwords match , the new password is rejected .
_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support