Yingyi Bu has uploaded a new change for review.
https://asterix-gerrit.ics.uci.edu/1649
Change subject: Fix spaces in docs.
......................................................................
Fix spaces in docs.
Change-Id: I22778fd5f89353850df775f60ac02c5e5d071686
---
M asterixdb/asterix-doc/src/main/installation/ansible.md
M asterixdb/asterix-doc/src/main/installation/aws.md
2 files changed, 79 insertions(+), 94 deletions(-)
git pull ssh://asterix-gerrit.ics.uci.edu:29418/asterixdb refs/changes/49/1649/1
diff --git a/asterixdb/asterix-doc/src/main/installation/ansible.md b/asterixdb/asterix-doc/src/main/installation/ansible.md
index 056871f..5b2a6a5 100644
--- a/asterixdb/asterix-doc/src/main/installation/ansible.md
+++ b/asterixdb/asterix-doc/src/main/installation/ansible.md
@@ -29,36 +29,36 @@
CentOS
- $ sudo yum install python-pip
+ $ sudo yum install python-pip
Ubuntu
- $ sudo apt-get install python-pip
+ $ sudo apt-get install python-pip
macOS
- $ brew install pip
+ $ brew install pip
* Install Ansible, boto, and boto3 on your client machine:
- $ pip install ansible
- $ pip install boto
- $ pip install boto3
+ $ pip install ansible
+ $ pip install boto
+ $ pip install boto3
Note that you might need `sudo` depending on your system configuration.
**Make sure that the version of Ansible is no less than 2.2.1.0**:
- $ ansible --version
- ansible 2.2.1.0
+ $ ansible --version
+ ansible 2.2.1.0
* Download the AsterixDB distribution package, unzip it, and navigate to `opt/ansible/`
- $ cd opt/ansible
+ $ cd opt/ansible
The following files and directories are in the directory `opt/ansible`:
- README bin conf yaml
+ README bin conf yaml
`bin` contains scripts that deploy, start, stop and erase a multi-node AsterixDB cluster, according to the configuration specified in files under `conf`, and `yaml` contains internal Ansible scripts that the shell
@@ -73,42 +73,33 @@
The following example configures a cluster with two slave nodes (172.0.1.11 and 172.0.1.12) and one master node (172.0.1.10).
- [cc]
- 172.0.1.10
+ [cc]
+ 172.0.1.10
- [ncs]
- 172.0.1.11
- 172.0.1.12
+ [ncs]
+ 172.0.1.11
+ 172.0.1.12
**Configure passwordless ssh from your current client that runs the scripts to all nodes listed in `conf/inventory` as well as `localhost`.** One way to set this up is sketched below.
If the ssh user account for target machines is different from your current username, please uncomment and edit the following two lines:
- ;[all:vars]
- ;ansible_ssh_user=<fill with your ssh account username>
+ ;[all:vars]
+ ;ansible_ssh_user=<fill with your ssh account username>
If you want to specify advanced Ansible builtin variables, please refer to the [Ansible documentation](http://docs.ansible.com/ansible/intro_inventory.html).
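(If passwordless ssh is not yet set up, a minimal sketch of one common way to do it follows, assuming `ssh-keygen` and `ssh-copy-id` are available on the client; the addresses are the example nodes from `conf/inventory` above, and the final `ansible` command is an optional reachability check, not a required step.)

    $ ssh-keygen -t rsa                        # skip if a key pair already exists
    $ ssh-copy-id 172.0.1.10                   # repeat for 172.0.1.11, 172.0.1.12, and localhost
    $ ansible all -i conf/inventory -m ping    # optional: verify all inventory hosts are reachable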
- * **Remote working directories**. Edit `conf/instance_settings.yml` to change the remote binary directories
- when necessary. By default, the binary directory will be under the home directory (as the value of
- Ansible builtin variable ansible_env.HOME) of the ssh user account on each node.
-
- # The name of the product being used.
- product: asterixdb
-
- # The parent directory for the working directory.
- basedir: "{{ ansible_env.HOME }}"
-
- # The working directory.
- binarydir: "{{ basedir }}/{{ product }}"
+ * **Remote working directories**. Edit `conf/instance_settings.yml` to change the remote binary directory
+ (the variable "binarydir") when necessary. By default, the binary directory will be under the home directory
+ (as the value of Ansible builtin variable ansible_env.HOME) of the ssh user account on each node.
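(For illustration only: a hypothetical override of `binarydir` in `conf/instance_settings.yml`, reusing the `product` variable shown in the settings above; the path is an example, not a recommended location.)

    # The working directory (hypothetical custom location).
    binarydir: "/scratch/{{ product }}"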
## <a id="lifecycle">Cluster Lifecycle Management</a>
* Deploy the binary to all nodes:
- $ bin/deploy.sh
+ $ bin/deploy.sh
* Every time before starting the AsterixDB cluster, you can edit the instance configuration file `conf/instance/cc.conf`, except that IP addresses/DNS names are generated and cannot
@@ -116,16 +107,16 @@
* Launch your AsterixDB cluster:
- $ bin/start.sh
+ $ bin/start.sh
Now you can use the multi-node AsterixDB cluster by opening the master node listed in `conf/inventory` at port `19001` (which can be customized in `conf/instance/cc.conf`) in your browser.
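(As a quick sanity check without a browser, the master node can also be probed from the command line; the address below is the example master node from `conf/inventory`.)

    $ curl http://172.0.1.10:19001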
- * If you want to stop the multi-node AsterixDB cluster, run the following script:
+ * If you want to stop the multi-node AsterixDB cluster, run the following script:
- $ bin/stop.sh
+ $ bin/stop.sh
- * If you want to remove the binary on all nodes, run the following script:
+ * If you want to remove the binary on all nodes, run the following script:
- $ bin/erase.sh
+ $ bin/erase.sh
diff --git a/asterixdb/asterix-doc/src/main/installation/aws.md b/asterixdb/asterix-doc/src/main/installation/aws.md
index 8b6602b..920b93c 100644
--- a/asterixdb/asterix-doc/src/main/installation/aws.md
+++ b/asterixdb/asterix-doc/src/main/installation/aws.md
@@ -35,36 +35,36 @@
CentOS
- $ sudo yum install python-pip
+ $ sudo yum install python-pip
Ubuntu
- $ sudo apt-get install python-pip
+ $ sudo apt-get install python-pip
macOS
- $ brew install pip
+ $ brew install pip
* Install Ansible, boto, and boto3 on your client machine:
- $ pip install ansible
- $ pip install boto
- $ pip install boto3
+ $ pip install ansible
+ $ pip install boto
+ $ pip install boto3
Note that you might need `sudo` depending on your system configuration.
**Make sure that the version of Ansible is no less than 2.2.1.0**:
- $ ansible --version
- ansible 2.2.1.0
+ $ ansible --version
+ ansible 2.2.1.0
* Download the AsterixDB distribution package, unzip it, and navigate to `opt/aws/`
- $ cd opt/aws
+ $ cd opt/aws
The following files and directories are in the directory `opt/aws`:
- README bin conf yaml
+ README bin conf yaml
`bin` contains scripts that start and terminate an AWS-based cluster instance, according to the configuration specified in files under `conf`, and `yaml` contains internal Ansible scripts that the shell scripts in `bin` use.
@@ -86,85 +86,79 @@
* Configure your ssh setting by editing `~/.ssh/config` and adding the following entry:
- Host *.amazonaws.com
+ Host *.amazonaws.com
IdentityFile <path_of_private_key>
Note that \<path_of_private_key\> should be replaced by the path to the file that stores the private key for the key pair that you uploaded to AWS and used in `conf/aws_settings`. For example:
- Host *.amazonaws.com
+ Host *.amazonaws.com
IdentityFile ~/.ssh/id_rsa
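(Once an EC2 instance is up, the entry can be verified by ssh-ing to the instance's public DNS name; the hostname below is purely illustrative, and `ec2-user` is the user name from `conf/instance_settings.yml`.)

    $ ssh [email protected]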
### <a id="config">Cluster Configuration</a>
- * **AWS settings**. Edit `conf/instance_settings.yml`. The meaning of each parameter is listed as follows:
+ * **AWS settings**. Edit `conf/instance_settings.yml`. The meaning of each parameter is listed as follows:
- # The OS image id for ec2 instances.
- image: ami-76fa4116
+ # The OS image id for ec2 instances.
+ image: ami-76fa4116
- # The data center region for ec2 instances.
- region: us-west-2
+ # The data center region for ec2 instances.
+ region: us-west-2
- # The tag for each ec2 machine. Use different tags for isolation.
- tag: scale_test
+ # The tag for each ec2 machine. Use different tags for isolation.
+ tag: scale_test
- # The name of a security group that appears in your AWS console.
- group: default
+ # The name of a security group that appears in your AWS console.
+ group: default
- # The name of a key pair that appears in your AWS console.
- keypair: <to be filled>
+ # The name of a key pair that appears in your AWS console.
+ keypair: <to be filled>
- # The AWS access key id for your IAM user.
- access_key_id: <to be filled>
+ # The AWS access key id for your IAM user.
+ access_key_id: <to be filled>
- # The AWS secret key for your IAM user.
- secret_access_key: <to be filled>
+ # The AWS secret key for your IAM user.
+ secret_access_key: <to be filled>
- # The AWS instance type. A full list of available types is listed at:
- # https://aws.amazon.com/ec2/instance-types/
- instance_type: t2.micro
+ # The AWS instance type. A full list of available types is listed at:
+ # https://aws.amazon.com/ec2/instance-types/
+ instance_type: t2.micro
- # The number of ec2 instances that constitute the cluster.
- count: 3
+ # The number of ec2 instances that constitute the cluster.
+ count: 3
- # The user name.
- user: ec2-user
+ # The user name.
+ user: ec2-user
- # Whether to reuse one slave machine to host the master process.
- cc_on_nc: false
+ # Whether to reuse one slave machine to host the master process.
+ cc_on_nc: false
**As described in [prerequisites](#Prerequisites), the following parameters must be customized:**
- # The tag for each ec2 machine. Use different tags for isolation.
- tag: scale_test
+ # The tag for each ec2 machine. Use different tags for isolation.
+ tag: scale_test
- # The name of a security group that appears in your AWS console.
- group: default
+ # The name of a security group that appears in your AWS console.
+ group: default
- # The name of a key pair that appears in your AWS console.
- keypair: <to be filled>
+ # The name of a key pair that appears in your AWS console.
+ keypair: <to be filled>
- # The AWS access key id for your IAM user.
- access_key_id: <to be filled>
+ # The AWS access key id for your IAM user.
+ access_key_id: <to be filled>
- # The AWS secret key for your IAM user.
- secret_access_key: <to be filled>
+ # The AWS secret key for your IAM user.
+ secret_access_key: <to be filled>
- * **Remote working directories**. Edit `conf/instance_settings.yml` to change the instance binary directories
- when necessary. By default, the binary directory will be under the home directory (as the value of
- Ansible builtin variable ansible_env.HOME) of the ssh user account on each node.
-
- # The parent directory for the working directory.
- basedir: "{{ ansible_env.HOME }}"
-
- # The working directory.
- binarydir: "{{ basedir }}/{{ product }}"
+ * **Remote working directories**. Edit `conf/instance_settings.yml` to change the remote binary directory
+ (the variable "binarydir") when necessary. By default, the binary directory will be under the home directory
+ (as the value of Ansible builtin variable ansible_env.HOME) of the ssh user account on each node.
### <a id="lifecycle">Cluster Lifecycle Management</a>
* Allocate AWS EC2 nodes (the number of nodes is specified in `conf/instance_settings.yml`) and deploy the binary to all allocated EC2 nodes:
- bin/deploy.sh
+ bin/deploy.sh
* Before starting the AsterixDB cluster, the instance configuration file `conf/instance/cc.conf` can be modified, with the exception of the IP addresses/DNS names, which are generated and cannot
@@ -172,7 +166,7 @@
* Launch your AsterixDB cluster on EC2:
- bin/start.sh
+ bin/start.sh
Now you can use the multi-node AsterixDB cluster on EC2 by opening the master node listed in `conf/instance/inventory` at port `19001` (which can be customized in `conf/instance/cc.conf`)
@@ -180,13 +174,13 @@
* If you want to stop the AWS-based AsterixDB cluster, run the following script:
- bin/stop.sh
+ bin/stop.sh
Note that this only stops AsterixDB but does not stop the EC2 nodes.
* If you want to terminate the EC2 nodes that run the AsterixDB cluster, run the following script:
- bin/terminate.sh
+ bin/terminate.sh
**Note that it will destroy everything in the AsterixDB cluster you installed and terminate all EC2 nodes for the cluster.**
--
To view, visit https://asterix-gerrit.ics.uci.edu/1649
Gerrit-MessageType: newchange
Gerrit-Change-Id: I22778fd5f89353850df775f60ac02c5e5d071686
Gerrit-PatchSet: 1
Gerrit-Project: asterixdb
Gerrit-Branch: master
Gerrit-Owner: Yingyi Bu <[email protected]>