Commit 008891b8 authored by Maiken

Various cleanup changes and improvements to generalize

parent a5bdd76c
atlact1.rfc.long.proxy
user.cert.pem
user.key.pem
act/
act.yml
vars/*yml
The run-sequence suggested below is based on elasticluster setting up the frontend:
http://elasticluster.readthedocs.io/en/latest/
The only change to elasticluster is the after_custom.yml file, which I run in addition. A copy of it is provided in the vars folder. The file needs to be copied into <your-elasticluster-path>/elasticluster/src/elasticluster/share/playbooks
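For example, with $play_vars pointing at the folder holding your extra vars files (see the variable settings below):

```
# copy the custom post-setup playbook into elasticluster's playbook folder
cp $play_vars/after_custom.yml \
   <your-elasticluster-path>/elasticluster/src/elasticluster/share/playbooks/
```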
If you simply want to set up an ARC compute element, see the section "Only ARC-CE setup".
This set of ansible playbooks has 3 main modes:
1) Standard installation using the released ARC version from the nordugrid repo
2) Installation using the nightly builds from the nordugrid build system
3) Installation from source and setting up ARC-CE using the LOCAL submission interface. This mode should later be split in two: installation from source, and, if wanted, setup in LOCAL mode.
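The mode is selected through the installtype variable set in the step0 sections below (a sketch; pick exactly one value):

```
# 1) released packages from the nordugrid repo
export installtype=standard
# 2) nightly builds from the nordugrid build system
export installtype=nightlies
# 3) build from source, LOCAL submission interface
export installtype=local
```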
Variables to set
You need to look through and possibly change variables in these places:
* hosts file (change the IPs)
* group_vars/all
* group_vars/frontend
* roles/frontend/templates
* roles/common/vars
* roles/compute/defaults
* roles/compute/templates (cvmfs setup)
## To set up a cluster from scratch
### Download and configure elasticluster following http://elasticluster.readthedocs.io/en/latest/
### step0
Edit all the files in the top vars folder with your custom values:
* For cvmfs.yml, see instructions on what content to put in such a file here: https://cernvm.cern.ch/portal/filesystem/quickstart
* Have a look through the other variables of the arc-ce playbook in the places mentioned above: group_vars and the roles/../vars folders
## Example settings:
Set these variables in your terminal:
* clustername=my_grid_cluster
* play_vars=/home/centos/myvars
* installtype=standard
* playbook=/home/centos/ansible/arc-ce/site_arc-ce.yml
* arc_repo=git
* localuser=centos
* lrmstype=slurm
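Collected as shell exports (a sketch using the example values above; arc_major=6 is an assumption matching arc_repo=git):

```
export clustername=my_grid_cluster
export play_vars=/home/centos/myvars
export installtype=standard
export playbook=/home/centos/ansible/arc-ce/site_arc-ce.yml
export arc_repo=git
export arc_major=6
export localuser=centos
export lrmstype=slurm
```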
## Common sequence for all installations except the INTERNAL installation
### step1
elasticluster -v start slurm -n $clustername
### step2
cp $play_vars/after_custom.yml <your-elasticluster-path>/elasticluster/src/elasticluster/share/playbooks/
### step3
cd <path-to-your-elasticluster-installation>
elasticluster -v setup $clustername -- <path-to-your-elasticluster-installation>/elasticluster/src/elasticluster/share/playbooks/after_custom.yml \
--tags "after" \
--extra-vars="localuser=$localuser lrms_type=$lrmstype cluster_name=$clustername" \
--extra-vars="@$play_vars/blockstorage.yml" \
--extra-vars="@$play_vars/griduser.yml" \
--extra-vars="@$play_vars/os_env.yml" \
--extra-vars="@$play_vars/nfs_export_mounts.yml"
### step4
cd <path-to-your-arc-ce-git-clone>
ansible-playbook <path-to-your-arc-ce-git-clone>/contrib/ansible/arc-ce/site_arc-ce.yml \
-i ~/.elasticluster/storage/$clustername.inventory \
--extra-vars="localuser=$localuser installationtype=$installtype arc_major=$arc_major arc_repo=$arc_repo lrms_type=$lrmstype" \
--extra-vars="@$play_vars/griduser.yml" \
--extra-vars="@$play_vars/host_env.yml" \
--extra-vars="@$play_vars/cvmfs.yml" \
--extra-vars="@$play_vars/slurm_pwd.yml"
## Preparations for standard-install (ARC 5)
The nordugrid packages are installed from the nordugrid repo using the list of packages arc_frontend_packages, which is defined in roles/frontend/vars/standard.yml.
In addition, ca_packages and dependency_packages (defined in the same place) are installed at the same time.
The dependency_packages list might need revision once in a while.
### step0
Set these variables in your terminal:
* export clustername=< your-cluster-name >
* export play_vars=< path-to-extra-vars-files >
* export installtype=standard
* export playbook=< path-to-arc-playbooks >/site_arc-ce.yml
* export localuser=<your-local-user-eg-centos>
* export lrmstype=<your-lrms-slurm-or-fork-or-condor>
* export arc_repo=svn
* export arc_major=5
### Then do steps 1-4 of the common sequence above
## Notes for nightlies-install ARC 5
RPMs according to the list in roles/frontend/vars/nightlies.yml are downloaded and put into a private repo.
Then the rpms in the local repo are installed, disabling all other repos.
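In shell terms this amounts to roughly the following (a sketch; the repo id "localrepo" is an assumption, see the localreponame variable in the group_vars):

```
# install the ARC nightlies from the private repo only
yum --disablerepo='*' --enablerepo=localrepo install 'nordugrid-arc*'
```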
The playbook will try to fetch today's nightlies. If these are not available, you will have to manually set the date (format YYYY-MM-DD) instead of using the variable {{ ansible_date_time.date }} in roles/frontend/tasks/install_nightlies.yml.
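A quick way to check whether today's nightlies exist (the URL layout is taken from the download tasks further down; the arc_version/os/arch values here are example assumptions):

```
# HTTP 200 on the status line means today's nightly folder exists
date=$(date +%F)   # YYYY-MM-DD, same format as ansible_date_time.date
curl -sI "http://builds.nordugrid.org/nightlies/packages/nordugrid-arc/6/$date/centos/el7/x86_64/" | head -n 1
```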
### step0
Set these variables in your terminal:
* export clustername=< your-cluster-name >
* export play_vars=< path-to-extra-vars-files >
* export installtype=nightlies
* export playbook=< path-to-arc-playbooks >/site_arc-ce.yml
* export localuser=<your-local-user-eg-centos>
* export lrmstype=<your-lrms-slurm-or-fork-or-condor>
* export arc_repo=svn
* export arc_major=5
### Then do steps 1-4 of the common sequence above
## Notes for nightlies-install ARC 6
RPMs according to the list in roles/frontend/vars/nightlies.yml are downloaded and put into a private repo.
Dependencies are extracted from the rpms and installed first; yum deplist with some awking and sorting is used to get the correct list of dependencies.
Then the rpms in the local repo are installed, disabling all other repos.
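The extraction step is roughly the following (a sketch; the exact awk fields are an assumption, the playbook holds the real pipeline):

```
# list the providers of everything the downloaded rpms depend on,
# deduplicated, ready to feed to yum install
yum deplist *.rpm | awk '/provider:/ {print $2}' | sort -u
```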
### step0
Set these variables in your terminal:
* export clustername=< your-cluster-name >
* export play_vars=< path-to-extra-vars-files >
* export installtype=nightlies
* export playbook=< path-to-arc-playbooks >/site_arc-ce.yml
* export localuser=<your-local-user-eg-centos>
* export lrmstype=<your-lrms-slurm-or-fork-or-condor>
* export arc_repo=git
* export arc_major=6
### Then do steps 1-4 of the common sequence above
## Notes for INTERNAL ARC + aCT mode - source-install - specific here for local install - arc 6 (only on git, not on svn)
Source is checked out from git and compiled.
Dependencies are installed first - defined in the roles/frontend/vars/local.yml file: ca_packages, dependency_packages_local and dependency_packages, in addition to the Globus Toolkit.
For the local installation, ARC and aCT run as a local user, therefore environment variables are set to point to the correct installation directory.
The rest of the dependencies are installed using the standard nordugrid-arc spec file, picked up e.g. from the epel or nordugrid repo. It would be better to extract a specific dependency list the same way as in the nightlies-install procedure, since the dependency list in the available spec file might not match the version of ARC we want to install. To be done.
Finally, the compile commands are run.
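For a git checkout of ARC, the compile step typically looks like this (a sketch only; the prefix and parallelism are assumptions, the playbook runs the real commands):

```
cd <path-to-your-arc-source-checkout>
./autogen.sh
./configure --prefix=$HOME/arc-install
make -j4
make install
```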
### step0
Set these variables in your terminal:
* export clustername=< your-cluster-name >
* export play_vars=< path-to-extra-vars-files >
* export installtype=local
* export playbook=< path-to-arc-playbooks >/site_arc-ce_act.yml
* export localuser=<your-local-user-eg-centos>
* export lrmstype=<your-lrms-slurm-or-fork-or-condor>
* export arc_repo=git
* export arc_major=6
#########################################################################################
## Command sequence to instantiate the cluster and install and configure ARC (and aCT)
### step1
elasticluster -v start slurm -n $clustername
### step2
Before this step, make sure to copy the vars/after_custom.yml file to <your-elasticluster-path>/elasticluster/src/elasticluster/share/playbooks
cd <path-to-your-elasticluster-installation>
elasticluster -v setup $clustername -- <path-to-your-elasticluster-installation>/elasticluster/src/elasticluster/share/playbooks/after_custom.yml \
--tags "after" \
--extra-vars="localuser=centos lrms_type=slurm cluster_name=$clustername" \
--extra-vars="localuser=$localuser lrms_type=$lrmstype cluster_name=$clustername" \
--extra-vars="@$play_vars/blockstorage.yml" \
--extra-vars="@$play_vars/griduser_local.yml" \
--extra-vars="@$play_vars/os_env.yml" \
--extra-vars="@$play_vars/nfs_export_mounts_local.yml"
### step3
cd <path-to-your-arc-ce-git-clone>
ansible-playbook <path-to-your-arc-ce-git-clone>/contrib/ansible/arc-ce/site_arc-ce.yml \
-i ~/.elasticluster/storage/$clustername.inventory \
--skip-tags="installarc,private-act,cvmfs,apache" \
--extra-vars="localuser=centos installationtype=local arc_major=6 arc_repo=$arc_repo lrms_type=slurm" \
--extra-vars="localuser=$localuser installationtype=$installtype arc_major=$arc_major arc_repo=$arc_repo lrms_type=$lrmstype" \
--extra-vars="@$play_vars/griduser_local.yml" \
--extra-vars="@$play_vars/os_env.yml" \
--extra-vars="@$play_vars/host_env.yml" \
--extra-vars="@$play_vars/cvmfs.yml" \
--extra-vars="@$play_vars/slurm_pwd.yml"
## Contents in the playbooks:
playbook: after_custom.yml
localpkg_dir: "/var/www/html/{{ localreponame }}"
#######################
## other packages and dependencies can be found in the roles/frontend/vars folder
## depends on the installation type (standard, nightlies, local)
## arc_frontend_services can also be found there
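For orientation, the per-installation-type variable files referred to above live here (file names from the prose; the one-line summaries are assumptions):

```
ls roles/frontend/vars/
# standard.yml   - package lists for the standard (released) install
# nightlies.yml  - package lists for the nightlies install
# local.yml      - dependencies for the source/local (INTERNAL) install
```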
######## ARC nightlies from svn (rhel)
- name: NG download path for nightlies from svn
  debug:
    msg: "http://builds.nordugrid.org/nightlies/packages/nordugrid-arc/{{ arc_version }}/{{ ansible_date_time.date }}/{{ os }}/{{ os_v }}/{{ frontend_arch }}/"
  when: is_rhel_compatible and arc_repo == "svn"
- name: Get hold of nightly packages from svn (rhel)
  command: "wget -r -A '*{{ item }}*.{{ pkg_ext }}' --level 1 --no-parent -nd http://builds.nordugrid.org/nightlies/packages/nordugrid-arc/{{ arc_version }}/{{ ansible_date_time.date }}/{{ os }}/{{ os_v }}/{{ frontend_arch }}/"
  with_items: '{{ arc_frontend_packages }}'
  args:
    chdir: "{{ localpkg_dir }}"
######## ARC nightlies from svn (debian)
- name: NG download path for nightlies from svn (debian)
  debug:
    msg: "http://builds.nordugrid.org/nightlies/packages/nordugrid-arc/{{ arc_version }}/{{ ansible_date_time.date }}/{{ os }}/{{ os_v }}/{{ frontend_arch }}/"
  when: is_debian_compatible and arc_repo == "svn"
- name: Get hold of nightly packages from svn (debian)
  command: "wget -r -A '*{{ item }}*.{{ pkg_ext }}' --level 1 --no-parent -nd http://builds.nordugrid.org/nightlies/packages/nordugrid-arc/{{ arc_version }}/{{ ansible_date_time.date }}/{{ os }}/{{ os_v }}/{{ frontend_arch }}/"
  with_items: '{{ arc_frontend_packages }}'
  args:
    chdir: "{{ localpkg_dir }}"
---
#
# This playbook is for site-local customization to ElastiCluster's
# playbooks. It runs *after* any other playbook distributed with
# ElastiCluster has gotten its chance to run.
#
# An empty playbook is checked into the Git repository. If you make
# any local modifications, please run `git update-index
# --assume-unchanged after.yml` to avoid committing them accidentally
# into ElastiCluster's main branch.
# the nfs-server coincides with slurm-master or frontend
# the nfs-client coincides with slurm-worker or compute
#### Install shade to be able to use openstack ansible module
- hosts: all
  tags:
    - after
    - local
  tasks:
    - name: Update packages
      yum:
        name: '*'
        state: latest
        exclude: kernel*
    - name: Dependencies for shade
      yum:
        name: "{{ item }}"
        state: present
      with_items:
        - epel-release
        - python-devel
        - openssl-devel
        - "@Development Tools"
        - python-pip
    - name: Install shade
      pip:
        name: shade
- hosts: all
  tags:
    - after
    - local
  tasks:
    - name: Create grid group
      group: name={{ group_name_grid }} state=present
    - name: Create generic grid user
      user: "name={{ user_name_grid }} group={{ group_name_grid }} state=present createhome=no"
#### Volumes on frontend
- hosts: frontend
  tags:
    - after
    - local
  tasks:
    - name: openstack volume | create volume for frontend
      environment: "{{ os_env }}"
      os_volume:
        state: present
        size: "{{ item.size }}"
        display_name: "{{ item.name }}"
      with_items: "{{ blockstorage_frontend }}"
    - name: openstack volume | attach volume to frontend host
      environment: "{{ os_env }}"
      os_server_volume:
        state: present
        server: "{{ cluster_name }}-{{ ansible_hostname }}"
        volume: "{{ item.name }}"
        device: "{{ item.src }}"
      with_items: "{{ blockstorage_frontend }}"
    - name: Create filesystem
      filesystem:
        fstype: "{{ item.fstype }}"
        dev: "{{ item.src }}"
      with_items: "{{ blockstorage_frontend }}"
    - name: Ensure directories exist
      file:
        path: "{{ item.path }}"
        state: directory
        owner: "{{ user_name_grid }}"
        group: "{{ group_name_grid }}"
        mode: 0755
      with_items: "{{ blockstorage_frontend }}"
    - name: Add mountpoints in fstab
      mount:
        fstype: "{{ item.fstype }}"
        path: "{{ item.path }}"
        src: "{{ item.src }}"
        state: mounted
      with_items: "{{ blockstorage_frontend }}"
### Volumes on compute
- hosts: compute
  tags:
    - after
    - local
  tasks:
    - name: openstack volume | create volume for compute
      environment: "{{ os_env }}"
      os_volume:
        state: present
        size: "{{ item.size }}"
        display_name: "{{ item.name }}"
      with_items: "{{ blockstorage_compute }}"
    - name: openstack volume | attach volume to compute host
      environment: "{{ os_env }}"
      os_server_volume:
        state: present
        server: "{{ cluster_name }}-{{ ansible_hostname }}"
        volume: "{{ item.name }}"
        device: "{{ item.src }}"
      with_items: "{{ blockstorage_compute }}"
    - name: Create filesystem
      filesystem:
        fstype: "{{ item.fstype }}"
        dev: "{{ item.src }}"
      with_items: "{{ blockstorage_compute }}"
    - name: Ensure directories exist
      file:
        path: "{{ item.path }}"
        state: directory
        owner: root
        group: root
        mode: 0755
      with_items: "{{ blockstorage_compute }}"
    - name: Add mountpoints in fstab
      mount:
        fstype: "{{ item.fstype }}"
        path: "{{ item.path }}"
        src: "{{ item.src }}"
        state: mounted
      with_items: "{{ blockstorage_compute }}"
###### Slurm hack
- hosts: all
  tags:
    - after
    - local
  tasks:
    - name: Comment out the VSizeFactor for grid jobs
      lineinfile:
        path: /etc/slurm/slurm.conf
        regexp: '^VSizeFactor'
        line: '#VSizeFactor'
        backup: yes
      ignore_errors: yes
###### NFS
- hosts: frontend
  tags:
    - after
    - local
  tasks:
    - name: After - Ensure shared dirs exist on nfs server
      file:
        path: '{{ item.path }}'
        state: directory
        owner: "{{ localuser }}"
        group: "{{ localuser }}"
        mode: 0755
      with_items: '{{ NFS_EXPORTS }}'
    - name: After - roles for nfs-server
      include_role:
        name: 'nfs-server'
- hosts: compute
  tags:
    - after
    - local
  tasks:
    - name: 'ensure {{ item.mountpoint }} directory exists and is owned by the user'
      file:
        path: '{{ item.mountpoint }}'
        state: directory
        group: "{{ localuser }}"
        owner: "{{ localuser }}"
        mode: 0755
      with_items: '{{ NFS_MOUNTS }}'
    - name: After - mount nfs shares
      mount:
        name: '{{ item.mountpoint }}'
        src: '{{ item.fs }}'
        fstype: nfs
        opts: '{{ item.options|default("rw,async") }}'
        state: mounted
      with_items: '{{ NFS_MOUNTS }}'
    - name: After - nfs-client - add to fstab
      include_role:
        name: 'nfs-client'
    - name: After - Restart SLURMd after all config is done (debian)
      service:
        name: slurmd
        state: restarted
      when: is_debian_compatible and (is_debian_8_or_later or is_ubuntu_15_10_or_later)
    - name: After - Restart slurm-llnl after all config is done (debian)
      service:
        name: slurm-llnl
        state: restarted
      when: is_debian_compatible and not (is_debian_8_or_later or is_ubuntu_15_10_or_later)
    - name: After - Restart SLURMd after all config is done (rhel7)
      service:
        name: slurmd
        state: restarted
      when: is_rhel7_compatible
    - name: After - Restart SLURMd after all config is done (rhel6)
      service:
        name: slurm
        state: restarted
      when: is_rhel6_compatible
...
CVMFS_REPOSITORIES: atlas.cern.ch,atlas-condb.cern.ch
CVMFS_HTTP_PROXY: <enter-your-https-proxies-here>
CVMFS_QUOTA_LIMIT: 20000
CVMFS_CACHE_BASE: /atlas_cvmfs
CVMFS_SHARED_CACHE: yes
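Once cvmfs is configured on the nodes, the mount can be verified with the standard client probe (run on a compute node; repository name from the config above):

```
cvmfs_config probe atlas.cern.ch
```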
---
local_env_rhel6:
  PATH: "{{ install_dir }}/bin:{{ install_dir }}/sbin:$PATH"
  LD_LIBRARY_PATH: "{{ install_dir }}/lib/arc"
  PYTHONPATH: "{{ install_dir }}/aCT/src:{{ install_dir }}/lib64/python2.6/site-packages:{{ install_dir }}/lib64/python2.6/site-packages/arc:$PYTHONPATH"
  ARC_LOCATION: "{{ install_dir }}"
  ARC_CONFIG: "{{ install_dir }}/etc/arc.conf"
  X509_USER_PROXY: "{{ grid_homedir }}/atlacrt1.rfc.long.proxy"
local_env_rhel7:
  PATH: "{{ install_dir }}/bin:{{ install_dir }}/sbin:$PATH"
  LD_LIBRARY_PATH: "{{ install_dir }}/lib/arc"
  PYTHONPATH: "{{ install_dir }}/aCT/src:{{ install_dir }}/lib64/python2.7/site-packages:{{ install_dir }}/lib64/python2.7/site-packages/arc:$PYTHONPATH"
  ARC_LOCATION: "{{ install_dir }}"
  ARC_CONFIG: "{{ install_dir }}/etc/arc.conf"
  X509_USER_PROXY: "{{ grid_homedir }}/atlacrt1.rfc.long.proxy"
...
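For reference, in a login shell these variables resolve to roughly the following (a sketch; the install_dir value is an assumption):

```
export ARC_LOCATION=$HOME/arc-install
export PATH=$ARC_LOCATION/bin:$ARC_LOCATION/sbin:$PATH
export LD_LIBRARY_PATH=$ARC_LOCATION/lib/arc
export ARC_CONFIG=$ARC_LOCATION/etc/arc.conf
```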