Commit d92b8c85 authored by Maiken

Major changes and updates

parent 12dc14d9
The run sequence suggested below is based on elasticluster setting up the frontend and compute nodes first, with SLURM, NFS and everything else needed.
http://elasticluster.readthedocs.io/en/latest/
The only change in elasticluster is the after_custom.yml file which I run. A copy of it is provided in the vars folder. The file needs to be copied into the <your-elasticluster-path>/elasticluster/src/elasticluster/share/playbooks folder.
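For example, assuming this repository and elasticluster are checked out side by side (the exact source path is an assumption), the copy step could look like:

#copy the custom post-setup playbook into elasticluster's playbooks folder
cp vars/after_custom.yml <your-elasticluster-path>/elasticluster/src/elasticluster/share/playbooks/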
This set of ansible playbooks has 3 main modes
@@ -10,7 +10,7 @@ This set of ansible playbooks has 3 main modes
3) Installation from source and setting up ARC-CE using the LOCAL submission interface. This mode should later be separated into providing installation from source and, if wanted, setting up in LOCAL mode.
Variables to set:
Change the IPs in the hosts file.
@@ -27,55 +27,72 @@ roles/compute/templates (cvmfs setup)
## Notes for standard-install (ARC 5)
### step0
clustername=<your-cluster-name>
play_vars=<path-to-extra-vars-files>
local=''
installtype=standard
playbook=<path-to-arc-playbooks>/site_arc-ce.yml
arc_repo=svn
The NorduGrid packages are installed from the NorduGrid repo using the list of packages arc_frontend_packages, which is defined in roles/frontend/vars/standard.yml.
In addition, ca_packages and dependency_packages (also defined in the same place) are installed at the same time.
The dependency_packages list might need revision once in a while.
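As a rough sketch (not necessarily the exact task in roles/frontend, which may loop or set enablerepo options), the installation step boils down to a single yum task over those lists:

#hypothetical sketch of the package-installation task
- name: Install ARC frontend, CA and dependency packages from the NorduGrid repo
  yum:
    name: "{{ arc_frontend_packages + ca_packages + dependency_packages }}"
    state: present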
Run sequence (specify the wanted cluster name with -n):
elasticluster -v start slurm -n gridclusterX
ansible-playbook grid-uh-cloud/ansible/common.yml -i grid-uh-cloud/ansible/hosts --tags "users" --extra-vars="installationtype=standard"
ansible-playbook grid-uh-cloud/ansible/frontend.yml -i grid-uh-cloud/ansible/hosts --tags "volumes" --extra-vars="installationtype=standard"
elasticluster -v setup gridclusterX -- elasticluster/src/elasticluster/share/playbooks/after_custom.yml --tags "after"
ansible-playbook grid-uh-cloud/ansible/frontend.yml -i grid-uh-cloud/ansible/hosts --tags "installarc" --extra-vars="installationtype=standard"
ansible-playbook grid-uh-cloud/ansible/compute.yml -i grid-uh-cloud/ansible/hosts --tags "cvmfs,volumes"
ansible-playbook grid-uh-cloud/ansible/common.yml -i grid-uh-cloud/ansible/hosts --tags "disable_selinux"
## Notes for nightlies-install (ARC 6)
### step0
clustername=<your-cluster-name>
play_vars=<path-to-extra-vars-files>
local=''
installtype=nightlies
playbook=<path-to-arc-playbooks>/site_arc-ce.yml
arc_repo=git
RPMs according to the list in roles/frontend/vars/nightlies.yml are downloaded and put into a private repo.
Dependencies are extracted from the RPMs and installed first; yum deplist, combined with awk and sort, is used to get the correct list of dependencies.
Then the RPMs in the local repo are installed, disabling all other repos.
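A minimal sketch of that kind of pipeline, assuming the nightly RPMs have already been downloaded into the local repo directory (paths and package patterns here are assumptions, not the literal commands from the playbook):

#collect the provider packages of all dependencies, deduplicate, and install them first
yum deplist /var/www/html/myyumrepo/*.rpm | awk '/provider:/ {print $2}' | sort -u | xargs yum install -y
#then install the nightly packages from the local repo only
yum install -y --disablerepo='*' --enablerepo=myyumrepo 'nordugrid-arc*'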
Run sequence (specify the wanted cluster name with -n):
ARC 5 nightlies installation
elasticluster -v start slurm -n gridclusterX
ansible-playbook grid-uh-cloud/ansible/common.yml -i grid-uh-cloud/ansible/hosts --tags "users" --extra-vars="installationtype=nightlies"
ansible-playbook grid-uh-cloud/ansible/frontend.yml -i grid-uh-cloud/ansible/hosts --tags "volumes" --extra-vars="installationtype=nightlies"
elasticluster -v setup gridclusterX -- elasticluster/src/elasticluster/share/playbooks/after_custom.yml --tags "after"
ansible-playbook grid-uh-cloud/ansible/frontend.yml -i grid-uh-cloud/ansible/hosts --tags "installarc" --extra-vars="installationtype=nightlies"
ansible-playbook grid-uh-cloud/ansible/compute.yml -i grid-uh-cloud/ansible/hosts --tags "cvmfs,volumes"
ansible-playbook grid-uh-cloud/ansible/common.yml -i grid-uh-cloud/ansible/hosts --tags "disable_selinux"
ARC 6 nightlies installation
elasticluster -v start slurm -n gridcluster-arc6
ansible-playbook grid-uh-cloud/ansible/common.yml -i grid-uh-cloud/ansible/hosts --tags "users" --extra-vars="installationtype=nightlies"
ansible-playbook grid-uh-cloud/ansible/frontend.yml -i grid-uh-cloud/ansible/hosts --tags "volumes" --extra-vars="installationtype=nightlies"
elasticluster -v setup gridcluster-arc6 -- elasticluster/src/elasticluster/share/playbooks/after_custom.yml --tags "after"
ansible-playbook grid-uh-cloud/ansible/frontend.yml -i grid-uh-cloud/ansible/hosts --tags "installarc" --extra-vars="installationtype=nightlies arc_major=6"
ansible-playbook grid-uh-cloud/ansible/compute.yml -i grid-uh-cloud/ansible/hosts --tags "cvmfs,volumes"
ansible-playbook grid-uh-cloud/ansible/common.yml -i grid-uh-cloud/ansible/hosts --tags "disable_selinux"
## Notes for source-install (specific here for local install, ARC 6)
### step0
clustername=<your-cluster-name>
play_vars=<path-to-extra-vars-files> #where nfs_export_mounts_local.yml etc. are placed
local=_local
installtype=local
playbook=<path-to-arc-playbooks>/site_arc-ce_act.yml
arc_repo=git
Source is checked out from git and compiled.
Dependencies are installed first - defined in the roles/frontend/vars/local.yml file: ca_packages, dependency_packages_local, dependency_packages - in addition to the Globus Toolkit.
For the local installation ARC and aCT are run as a local user, therefore environment variables are set to point to the correct installation directory.
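As an illustration only (the playbooks set this up themselves; the prefix below is an assumption based on install_dir and grid_homedir in group_vars), the environment for such a user-local installation would look roughly like:

#hypothetical environment for a user-local ARC installation under /grid/software
export ARC_LOCATION=/grid/software
export PATH=$ARC_LOCATION/bin:$ARC_LOCATION/sbin:$PATH
export LD_LIBRARY_PATH=$ARC_LOCATION/lib64:$LD_LIBRARY_PATH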
@@ -85,15 +102,238 @@ Rest of dependencies are installed using the available standard nordugrid-arc sp
Finally, the compile commands are run.
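For a git checkout of ARC 6 this is an autotools build; a hedged sketch of what those compile commands typically look like (the exact configure flags used by the playbook may differ):

#build and install ARC from the git checkout into the local prefix
cd nordugrid-arc
./autogen.sh
./configure --prefix=/grid/software
make -j"$(nproc)"
make install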
Run sequence (specify the wanted cluster name for the -n option):
elasticluster -v start slurm -n gridclusterX
ansible-playbook grid-uh-cloud/ansible/common.yml -i grid-uh-cloud/ansible/hosts --tags "users" --extra-vars="installationtype=local" #not necessary if user is default user (e.g. centos)
ansible-playbook grid-uh-cloud/ansible/frontend.yml -i grid-uh-cloud/ansible/hosts --tags "volumes" --extra-vars="installationtype=local"
elasticluster -v setup gridclusterX -- elasticluster/src/elasticluster/share/playbooks/after_custom_local.yml --tags "after"
ansible-playbook grid-uh-cloud/ansible/frontend.yml -i grid-uh-cloud/ansible/hosts --tags "installarc" --extra-vars="installationtype=local"
ansible-playbook grid-uh-cloud/ansible/act.yml -i grid-uh-cloud/ansible/hosts --tags "act" --extra-vars="installationtype=local"
ansible-playbook grid-uh-cloud/ansible/compute.yml -i grid-uh-cloud/ansible/hosts --tags "cvmfs,volumes" --extra-vars="installationtype=local"
ansible-playbook grid-uh-cloud/ansible/common.yml -i grid-uh-cloud/ansible/hosts --tags "disable_selinux"
#########################################################################################
## Command sequence to instantiate the cluster and install and configure ARC (and aCT)
### step1
elasticluster -v start slurm -n $clustername
### step2
Before this step, make sure to copy the vars/after_custom.yml file to <your-elasticluster-path>/elasticluster/src/elasticluster/share/playbooks.
The command below assumes you are in the directory just above the elasticluster directory.
elasticluster -v setup $clustername -- elasticluster/src/elasticluster/share/playbooks/after_custom.yml \
--tags "after" \
--extra-vars="localuser=centos lrms_type=slurm cluster_name=$clustername" \
--extra-vars="@$play_vars/blockstorage.yml" \
--extra-vars="@$play_vars/griduser_local.yml" \
--extra-vars="@$play_vars/os_env.yml" \
--extra-vars="@$play_vars/nfs_export_mounts_local.yml"
### step3
ansible-playbook grid-uh-cloud/ansible/site_arc-ce_act.yml \
-i ~/.elasticluster/storage/$clustername.inventory \
--skip-tags="installarc,private-act,cvmfs,apache" \
--extra-vars="localuser=centos installationtype=local arc_major=6 arc_repo=$arc_repo lrms_type=slurm" \
--extra-vars="@$play_vars/griduser_local.yml" \
--extra-vars="@$play_vars/os_env.yml" \
--extra-vars="@$play_vars/host_env.yml" \
--extra-vars="@$play_vars/slurm_pwd.yml"
## Contents of the playbooks:
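The overview below has the form of ansible-playbook's task listing; an equivalent listing can be regenerated with the --list-tasks option, for example:

ansible-playbook grid-uh-cloud/ansible/site_arc-ce_act.yml -i grid-uh-cloud/ansible/hosts --list-tasks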
playbook: after_custom.yml
play #1 (all): all TAGS: [after,local]
tasks:
Update packages TAGS: [after, local]
Dependencies for shade TAGS: [after, local]
Install shade TAGS: [after, local]
play #2 (all): all TAGS: [after,local]
tasks:
Create grid group TAGS: [after, local]
Create generic grid user TAGS: [after, local]
play #3 (frontend): frontend TAGS: [after,local]
tasks:
openstack volume | create volume for frontend TAGS: [after, local]
openstack volume | attach volume to frontend host TAGS: [after, local]
Create filesystem TAGS: [after, local]
Ensure directories exist TAGS: [after, local]
Add mountpoints in fstab TAGS: [after, local]
play #4 (compute): compute TAGS: [after,local]
tasks:
openstack volume | create volume for compute TAGS: [after, local]
openstack volume | attach volume to compute host TAGS: [after, local]
Create filesystem TAGS: [after, local]
Ensure directories exist TAGS: [after, local]
Add mountpoints in fstab TAGS: [after, local]
play #5 (all): all TAGS: [after,local]
tasks:
Comment out the VSizeFactor for grid jobs TAGS: [after, local]
play #6 (frontend): frontend TAGS: [after,local]
tasks:
After - Ensure shared dirs exist on nfs server TAGS: [after, local]
nfs-server : Load distribution-specific parameters TAGS: [after, local, nfs, nfs-server]
nfs-server : install NFS server software TAGS: [after, local, nfs, nfs-server]
nfs-server : Export directories TAGS: [after, local, nfs, nfs-server]
nfs-server : Ensure NFS server is running (Debian 8 "jessie") TAGS: [after, local, nfs, nfs-server]
nfs-server : Ensure NFS server is running TAGS: [after, local, nfs, nfs-server]
nfs-server : Reload NFS exports file TAGS: [after, local, nfs, nfs-server]
play #7 (compute): compute TAGS: [after,local]
tasks:
ensure {{ item. mountpoint }} directory exists and owned by user TAGS: [after, local]
After - mount nfs shares TAGS: [after, local]
nfs-client : install NFS client software (Debian/Ubuntu) TAGS: [after, local, nfs, nfs-client]
nfs-client : install NFS client software (RHEL-compatible) TAGS: [after, local, nfs, nfs-client]
nfs-client : Ensure `rpcbind` is running (Debian) TAGS: [after, local]
nfs-client : Ensure `rpcbind` is running (RHEL-compatible) TAGS: [after, local]
nfs-client : Ensure `portmap` is running (Ubuntu prior to 14.04) TAGS: [after, local]
nfs-client : Ensure `rpcbind` is running (Ubuntu 14.04 or newer) TAGS: [after, local]
nfs-client : Mount NFS filesystems TAGS: [after, local, nfs, nfs-client]
After - Restart SLURMd after all config is done (debian) TAGS: [after, local]
After - Restart slurm-llnl after all config is done (debian) TAGS: [after, local]
After - Restart SLURMd after all config is done (rhel7) TAGS: [after, local]
After - Restart SLURMd after all config is done (rhel6) TAGS: [after, local]
playbook: site_arc-ce_act.yml
play #1 (all): Hack to get correct os_v name not working in group_vars/all TAGS: []
tasks:
set some facts (el6) TAGS: [always]
play #2 (all): Hack to get correct os_v name not working in group_vars/all TAGS: []
tasks:
set_come_facts (el7) TAGS: [always]
play #3 (all): =====> Debug TAGS: []
tasks:
output stuff TAGS: [always]
play #4 (frontend): Install and configure Nordugrid ARC on frontend TAGS: [installarc]
tasks:
frontend : Load the correct variables file (standard.yml) TAGS: [always, installarc]
frontend : Load the correct variables file (nightlies.yml) TAGS: [always, installarc]
frontend : Load the correct variables file (arc6.yml also for nightlies arc6) TAGS: [always, installarc]
frontend : Load the correct variables file (local_arc5.yml) TAGS: [always, installarc]
frontend : Load the correct variables file (local_arc6.yml) TAGS: [always, installarc]
frontend : Check if apache (httpd) service exist TAGS: [apache, installarc]
frontend : Install apache2 TAGS: [apache, installarc]
frontend : Enable apache TAGS: [apache, installarc]
frontend : Start apache TAGS: [apache, installarc]
frontend : Install firewalld TAGS: [apache, installarc]
frontend : Start firewalld TAGS: [apache, installarc]
frontend : Enable firewalld TAGS: [apache, installarc]
frontend : Configure firewall http TAGS: [apache, installarc]
frontend : Configure firewall https TAGS: [apache, installarc]
frontend : Restart firewall TAGS: [apache, installarc]
frontend : Start apache TAGS: [apache, installarc]
frontend : Ensure directories exist TAGS: [installarc]
frontend : Ensure grid-security folder exists TAGS: [gridmap, installarc]
frontend : ensure grid-mapfile exists TAGS: [gridmap, installarc]
frontend : add maikenp to grid-mapfile as griduser TAGS: [gridmap, installarc]
frontend : add aCT to grid-mapfile as griduser TAGS: [gridmap, installarc]
frontend : Ensure /etc/grid-security folder exists TAGS: [certif, installarc]
frontend : Determine if CertificateGenerator.py is already downloaded TAGS: [certif, installarc]
frontend : Download CertificateGenerator.py TAGS: [certif, installarc]
frontend : Run CertificateGenerator.py to create host certificate TAGS: [certif, installarc]
frontend : checks hostcert exists on remote path TAGS: [certif, installarc]
frontend : checks hostkey exists on remote path TAGS: [certif, installarc]
frontend : copy host certificate key file if it exists TAGS: [certif, installarc]
frontend : copy host certificate file if it exists TAGS: [certif, installarc]
frontend : checks ca pem exists on remote path TAGS: [certif, installarc]
frontend : copy CA to /etc/grid-security/certificates TAGS: [certif, installarc]
frontend : checks ca signing policy exists on remote path TAGS: [certif, installarc]
frontend : copy ca signing policy file if it exists TAGS: [certif, installarc]
frontend : Get hold of hash for pem file TAGS: [certif, installarc]
frontend : Get hold of old hash for pem file TAGS: [certif, installarc]
frontend : Create softlinks to pem file TAGS: [certif, installarc]
frontend : Ensure controldir exist TAGS: [always, installarc]
frontend : Include installation play for standard installation from nordugrid repo TAGS: [installarc]
frontend : Include installation play for nightlies installation TAGS: [installarc]
frontend : Include installation play for LOCAL plugin installation TAGS: [installarc]
frontend : Include arcconf.yml TAGS: [arcconf, installarc]
frontend : Create runtime APPS/HEP directory TAGS: [installarc, runtime]
frontend : Copy ATLAS-SITE file to APPS/HEP shared folder TAGS: [installarc, runtime]
frontend : Create runtime APPS/PRACE directory TAGS: [installarc, runtime]
frontend : Copy DOCKER file to APPS/PRACE shared folder TAGS: [installarc, runtime]
frontend : Create cron job that copies runtime apps from CVMFS TAGS: [installarc, runtime]
frontend : Include slurm.yml TAGS: [installarc, myslurm]
frontend : Include startarc.yml if installationtype is standard or nightlies TAGS: [installarc, startarc]
frontend : Include startarc_local.yml if installationtype is local TAGS: [installarc, startarc]
play #5 (frontend): Copy usercert and key to frontend - including some useful scripts TAGS: [private-act,private-arc]
tasks:
private-act : Copy other useful files TAGS: [private-act, private-arc, useful]
private-arc : Create .globus dir for usercert TAGS: [private-act, private-arc, usercert]
private-arc : Copy usercert and userkey to .globus dir TAGS: [private-act, private-arc, usercert]
private-arc : Change ownership on key TAGS: [private-act, private-arc, usercert]
private-arc : Copy hello-world test submission scripts TAGS: [private-act, private-arc]
play #6 (frontend): Install and configure Nordugrid aCT on frontend TAGS: [installact]
tasks:
act : Load the correct variables file (rhel6.yml) TAGS: [installact]
act : Load the correct variables file (rhel7.yml) TAGS: [installact]
act : Install pip TAGS: [installact]
act : Install required pexpect python module needed to use expect ansible module on host TAGS: [installact]
act : Copy vomses file TAGS: [installact]
act : Install needed dependency (mysql-connector-python) TAGS: [installact]
act : Install needed perl stuff cpanminus TAGS: [installact]
act : Install needed perl module JSON/XS.pm TAGS: [installact]
act : Check if {{ install_dir }}/aCT already exists TAGS: [installact]
act : Remove {{ install_dir }}/aCT if it exists TAGS: [installact]
act : Check out aCT TAGS: [installact]
act : Stat the aCT dir TAGS: [installact]
act : Place template aCTConfigARC in correct location TAGS: [installact]
act : Place template aCTConfigATLAS in correct location TAGS: [installact]
act : Copy to home folder aCTConfigARC TAGS: [installact]
act : Place template aCTConfigATLAS in correct location TAGS: [installact]
act : Prepare proxy folder TAGS: [installact]
act : Copy the act-long-proxy TAGS: [installact]
act : Create a new proxy file TAGS: [installact]
act : Create bashrc file from template TAGS: [installact]
act : Hack to move bashrc to .bashrc TAGS: [installact]
act : set paths TAGS: [installact]
act : show pythonpath TAGS: [installact]
act : show env TAGS: [installact]
act : Add own addnewjob script TAGS: [installact]
act : Create act database TAGS: [installact]
act : Create user centos at localhost in mysql TAGS: [installact]
act : Create arc table TAGS: [installact]
act : Create proxy table TAGS: [installact]
act : Create panda table TAGS: [installact]
act : Restart aCT TAGS: [installact]
play #7 (compute): Configure compute node(s) TAGS: [cvmfs]
tasks:
compute : Install repo for cvmfs TAGS: [cvmfs]
compute : Install cvmfs stuff TAGS: [cvmfs]
compute : Change permissions of auto.master to manually change auto.master to include cvmfs - for some reason having troubles with cvmfs_config setup TAGS: [cvmfs]
compute : prepare basic setup TAGS: [cvmfs]
compute : Manually change auto.master to include cvmfs - for some reason having troubles with cvmfs_config setup TAGS: [cvmfs]
compute : Template cvmfs local config TAGS: [cvmfs]
compute : Restart autofs TAGS: [cvmfs]
compute : Reload cvmfs config TAGS: [cvmfs]
play #8 (all): Cluster disable selinux and reboot TAGS: [disable_selinux]
tasks:
common : Load the correct variables file (standard.yml also for nightlies) TAGS: [disable_selinux]
common : Load the correct variables file (local.yml) TAGS: [disable_selinux]
common : Create grid group TAGS: [disable_selinux, users]
common : Create generic grid user TAGS: [disable_selinux, users]
common : Include condor.yml for condor installation if applicable TAGS: [condor, disable_selinux]
common : Disable SELinux, will take action once the cluster is rebooted TAGS: [disable_selinux, selinux]
common : Reboot the server for selinux disabled to take effect TAGS: [disable_selinux, selinux]
common : Wait for the server to reboot TAGS: [disable_selinux, selinux]
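Since every task above carries its tags, a subset of the site playbook can be re-run with --tags or --skip-tags. For example (inventory path and extra vars as in step3; the tag choice is just an illustration):

ansible-playbook grid-uh-cloud/ansible/site_arc-ce_act.yml -i ~/.elasticluster/storage/$clustername.inventory --tags "cvmfs" --extra-vars="installationtype=local"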
@@ -2,7 +2,7 @@
---
- hosts: cluster
  gather_facts: yes
  become: yes
  roles:
---
- hosts: frontend
  gather_facts: yes
  become: yes
  roles:
    - frontend
...
#OS_PROJECT_NAME: uio-test-hpc-grid
## Needs to be changed manually before running
## Needs editing for LOCAL plugin installation versus normal installation
frontend_ip: "{{ hostvars[groups['frontend'][0]].ansible_default_ipv4.address }}"
frontend_name: "{{ hostvars[groups['frontend'][0]].ansible_hostname }}"
localuser: "centos"
slurm_db_pw: "your-password-here"
queue: main
#this will give only 1 compute ip? What if I have >1 compute nodes
#intention is to here distinguish between frontend and any compute nodes
#must check!
compute_ip: "{{ hostvars[groups['compute'][0]].ansible_default_ipv4.address }}"
compute_name: "{{ hostvars[groups['compute'][0]].ansible_hostname }}"
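##Hedged suggestion (not active here): with more than one compute node the IPs could be collected as a list instead of taking only the first host, e.g.
##compute_ips: "{{ groups['compute'] | map('extract', hostvars, ['ansible_default_ipv4', 'address']) | list }}"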
machine_ip: "{{ hostvars[inventory_hostname].ansible_default_ipv4.address }}"
machine_name: "{{ ansible_hostname }}"
##Setting default user_name and group_name. This could be overwritten in roles/<therole>/vars files, so check that
user_name_grid: "griduser"
group_name_grid: "grid"
#user_name_grid: "centos"
#group_name_grid: "centos"
#for controldir, logs and software
grid_homedir: "/grid"
#for sessiondir, runtime, cache
arc_frontend_griddir: "/wlcg"
shared_scratch: "{{ arc_frontend_griddir }}"
arc_frontend_cachedir: "{{ arc_frontend_griddir }}/cache"
arc_frontend_sessiondir: "{{ arc_frontend_griddir }}/session"
arc_frontend_runtimedir: "{{ arc_frontend_griddir }}/runtime"
arc_frontend_controldir: "{{ grid_homedir }}/control"
cvmfs_path: "/atlas_cvmfs"
frontend_dirs:
- "{{arc_frontend_sessiondir}}"
- "{{arc_frontend_runtimedir}}"
- "{{arc_frontend_controldir}}"
- "{{arc_frontend_cachedir}}"
install_dir: "{% if installationtype=='local' %}{{grid_homedir}}/software{% else %}/usr/local{% endif %}"
globus_package_base_url: https://downloads.globus.org/toolkit/gt6/stable/installers/repo
globus_package_rhel: globus-toolkit-repo-latest.noarch.rpm
globus_package_deb: globus-toolkit-repo-latest_all.deb
globus_package_rhel_url: "{{ globus_package_base_url}}/rpm/{{globus_package_rhel }}"
globus_package_deb_url: "{{ globus_package_base_url}}/deb/{{globus_package_deb }}"
# broad OS family, used to set package manager, etc.
is_debian_compatible: (ansible_os_family == 'Debian')
is_rhel_compatible: (ansible_os_family == 'RedHat')
# distributions by name
is_centos: (ansible_distribution in ['CentOS', 'Scientific'])
is_debian: "('{{ansible_distribution}}' == 'Debian')"
is_scientific_linux: "('{{ansible_distribution}}' == 'Scientific')"
is_ubuntu: "('{{ansible_distribution}}' == 'Ubuntu')"
is_debian_or_ubuntu: "({{is_debian}} or {{is_ubuntu}})"
# Debian releases by version
is_debian_7: "({{is_debian}} and {{ansible_distribution_major_version}}|int == 7)"
is_debian_8: "({{is_debian}} and {{ansible_distribution_major_version}}|int == 8)"
is_debian_9: "({{is_debian}} and {{ansible_distribution_major_version}}|int == 9)"
# Debian release ranges
is_debian_7_or_later: "({{is_debian}} and {{ansible_distribution_major_version}}|int >= 7)"
is_debian_8_or_later: "({{is_debian}} and {{ansible_distribution_major_version}}|int >= 8)"
is_debian_9_or_later: "({{is_debian}} and {{ansible_distribution_major_version}}|int >= 9)"
# RHEL family releases by version
is_rhel5_compatible: "({{is_rhel_compatible}} and {{ansible_distribution_major_version}}|int == 5)"
is_rhel6_compatible: (is_rhel_compatible and ansible_distribution_major_version|int == 6)
is_rhel7_compatible: (is_rhel_compatible and ansible_distribution_major_version|int == 7)
is_el6: (is_centos and ansible_distribution_major_version|int == 6)
is_el7: (is_centos and ansible_distribution_major_version|int == 7)
# RHEL family release ranges
is_rhel6_or_later_compatible: "({{is_rhel_compatible}} and {{ansible_distribution_major_version}}|int >= 6)"
is_rhel7_or_later_compatible: "({{is_rhel_compatible}} and {{ansible_distribution_major_version}}|int >= 7)"
# Ubuntu releases by version
is_ubuntu_12_04: "({{is_ubuntu}} and '{{ansible_distribution_version}}' == '12.04')"
is_ubuntu_12_10: "({{is_ubuntu}} and '{{ansible_distribution_version}}' == '12.10')"
is_ubuntu_13_04: "({{is_ubuntu}} and '{{ansible_distribution_version}}' == '13.04')"
is_ubuntu_13_10: "({{is_ubuntu}} and '{{ansible_distribution_version}}' == '13.10')"
is_ubuntu_14_04: "({{is_ubuntu}} and '{{ansible_distribution_version}}' == '14.04')"
is_ubuntu_14_10: "({{is_ubuntu}} and '{{ansible_distribution_version}}' == '14.10')"
is_ubuntu_15_04: "({{is_ubuntu}} and '{{ansible_distribution_version}}' == '15.04')"
is_ubuntu_15_10: "({{is_ubuntu}} and '{{ansible_distribution_version}}' == '15.10')"
is_ubuntu_16_04: "({{is_ubuntu}} and '{{ansible_distribution_version}}' == '16.04')"
is_ubuntu_16_10: "({{is_ubuntu}} and '{{ansible_distribution_version}}' == '16.10')"
# Ubuntu release ranges
is_ubuntu_12_04_or_later: "({{is_ubuntu}} and {{ansible_distribution_major_version}}|int >= 12)"
is_ubuntu_13_04_or_later: "({{is_ubuntu}} and {{ansible_distribution_major_version}}|int >= 13)"
is_ubuntu_14_04_or_later: "({{is_ubuntu}} and {{ansible_distribution_major_version}}|int >= 14)"
is_ubuntu_15_04_or_later: "({{is_ubuntu}} and {{ansible_distribution_major_version}}|int >= 15)"
is_ubuntu_16_04_or_later: "({{is_ubuntu}} and {{ansible_distribution_major_version}}|int >= 16)"
is_ubuntu_12_10_or_later: "({{is_ubuntu}} and ('{{ansible_distribution_version}}' == '12.10' or {{ansible_distribution_major_version}}|int > 12))"
is_ubuntu_13_10_or_later: "({{is_ubuntu}} and ('{{ansible_distribution_version}}' == '13.10' or {{ansible_distribution_major_version}}|int > 13))"
is_ubuntu_14_10_or_later: "({{is_ubuntu}} and ('{{ansible_distribution_version}}' == '14.10' or {{ansible_distribution_major_version}}|int > 14))"
is_ubuntu_15_10_or_later: "({{is_ubuntu}} and ('{{ansible_distribution_version}}' == '15.10' or {{ansible_distribution_major_version}}|int > 15))"
install_dir: "/usr/local"
#install_dir: "{{ grid_homedir }}/software"
### Variable needed to pick up correct rpms from nightly build system
### nothing of this work, first test always true what is correct syntax???
### instead setting the correct variables with when condition in site_arc-ce_act.yml playbook
##only set up for now for centos and debian
#os: "{% if is_centos %}centos{% elif is_debian %}debian{% endif %}"
#os_v: "{% if is_rhel6_compatible %}el6{% elif is_rhel7_compatible %}el7{% elif is_debian_8 %}8{% elif is_debian_9 %}9{% endif %}"
#pkg_ext: "{% if is_rhel_compatible %}rpm{% elif is_debian_compatible%}deb{% endif %}"
#arc_frontend_runtimedir_cvmfs: "{% if is_el6 %}/cvmfs/fgi.csc.fi/runtimes/el6{% elif is_el7 %}/cvmfs/fgi.csc.fi/runtimes/el7{% endif %}"
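## Hedged sketch (an assumption, not part of this file): the same information could instead be set
## as facts in a play with a when condition, roughly like
## - set_fact:
##     os: centos
##     os_v: el7
##     pkg_ext: rpm
##   when: ansible_os_family == 'RedHat' and ansible_distribution_major_version|int == 7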
init_griduser_accts: true
#needs checking, there might be some doubling up here and there
arc_frontend_release: "15.03"
#frontend_ip: "{{ hostvars['frontend001'].ansible_default_ipv4.address }}"
frontend_ip: "{{ hostvars[inventory_hostname].ansible_default_ipv4.address }}"
#must correspond to queue in arc.conf
queue: "main"
panda_queue: "UIO_CLOUD"
################ RHEL
##change name eventually to myyumrepo and adjust in install_nightlies.yml
localreponame: myrepo
localpkg_dir: "/var/www/html/{{ localreponame }}"
#set with argument --extra-vars "installationtype=nightlies" when running the play, or set here
# values are nightlies, standard or local; default is standard
# installationtype can be set on the command line at runtime: --extra-vars "installationtype=local"
installationtype: ""
#######################
#if local-install, arc_version must be set, of the form trunk or mybranch
#arc_version: "local-plugin"
############## NIGHTLIES from nightly build system http://download.nordugrid.org/builds/index.php?pkgname=nordugrid-arc&type=trunk
arc_nightly_pkg_folder: "2018-02-27"
#######################
#if nightlies these must be set
localreponame: myyumrepo
localrpm_dir: "/var/www/html/{{ localreponame }}"
arc_version: "6.0"
arc_nightly_rpm_folder: "2018-01-09"
frontend_os: "centos"
frontend_os_v: "el6"
frontend_arch: "x86_64"
## other packages and dependencies can be found in the roles/frontend/vars folder
## depends on the installation type (standard, nightlies, local)
## arc_frontend_services can also be found there
arc_frontend_packages:
- nordugrid-arc
- nordugrid-arc-arex
- nordugrid-arc-gridftpd
- nordugrid-arc-gridmap-utils
- nordugrid-arc-ldap-infosys
- nordugrid-arc-aris
- nordugrid-arc-arex
- nordugrid-arc-hed
- nordugrid-arc-python
- nordugrid-arc-plugins-globus
- nordugrid-arc-plugins-xrootd
- nordugrid-arc-plugins-needed
- nordugrid-arc-ldap-infosys
- python2-nordugrid-arc
ca_packages:
- ca_policy_igtf-classic
- ca_policy_igtf-mics
- ca_policy_igtf-slcs
dependency_packages:
- net-tools
- time
- perl-Inline
- perl-Inline-Python
- perl-JSON
- perl-JSON-XS
...
@@ -8,47 +8,62 @@
# - You can enter hostnames or ip addresses
# - A hostname/ip can be a member of multiple groups
# Ex 1: Ungrouped hosts, specify before any group headers.
## green.example.com
## blue.example.com
## 192.168.100.1
## 192.168.100.10
# Ex 2: A collection of hosts belonging to the 'webservers' group
## [webservers]
## alpha.example.org
## beta.example.org
## 192.168.1.100
## 192.168.1.110
# If you have multiple hosts following a pattern you can specify
# them like this:
## www[001:006].example.com
# Ex 3: A collection of database servers in the 'dbservers' group
## [dbservers]
##
## db01.intranet.mydomain.net
## db02.intranet.mydomain.net
## 10.25.1.56
## 10.25.1.57
# Here's another example of host ranges, this time there are no
# leading 0s:
## db-[99:101]-node.example.com
[cluster]