
/etc/rsyslog.conf edits with ansible

The host called “wmd1” used to be the remote logger for all the Linux boxes in the “left” data center; in the “right” data center it was “wmd2”. For whatever reason, its replacement in the “left” data center is now called “wmd7”. What follows is an Ansible playbook allowing for a mass edit across all the “left” boxes.

- hosts: left        # the "left" data-center group (name assumed)
  remote_user: root
  tasks:

   - name: copy /etc/rsyslog.conf to /etc/rsyslog.conf.OLD
     copy: src=/etc/rsyslog.conf dest=/etc/rsyslog.conf.OLD force=no

   - name: replace the name of the remote logger or insert it if missing
     shell: grep '.*wmd.*' /etc/rsyslog.conf && sed --in-place 's/wmd[0-9]/wmd7/' /etc/rsyslog.conf || echo '*.* @@wmd7' >> /etc/rsyslog.conf ; service rsyslog restart

The shell line is a single long line – there are no line-continuation characters above. (In rsyslog's forwarding syntax, @@ sends the messages over TCP; a single @ would use UDP.)
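To spot-check the result afterwards, an ad-hoc call does nicely (assuming the same “left” group name as in the playbook):

# ansible left -a "grep wmd /etc/rsyslog.conf"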


copying files among hosts with ansible

How does one copy a file from one machine to a set of hosts? In /etc/ansible/hosts there is a group of hosts defined under the ahd label.


There is a host called sysmgttl1 with the file /etc/testWMD, which we want to copy to every host in the Ansible ahd group.
This task is easily accomplished with a playbook with the following content:

- hosts: ahd
  tasks:
    - name: Transfer file from sysmgttl1 to hosts in the ahd group
      # synchronize pushes the file from the delegated host straight to each target
      synchronize:
        src: /etc/testWMD
        dest: /etc/testWMD
      delegate_to: sysmgttl1

Our playbook is called ehd_sync:

# ansible-playbook ehd_sync
 ____________
< PLAY [ahd] >
 ------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
 ______________
< TASK [setup] >
 --------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
ok: [host1]
ok: [host2]
ok: [host3]
 _______________________________________________________________
< TASK [Transfer file from sysmgttl1 to hosts in the ahd group] >
 ---------------------------------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

changed: [host1 -> sysmgttl1]
changed: [host3 -> sysmgttl1]
changed: [host2 -> sysmgttl1]
 ____________
< PLAY RECAP >
 ------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

host1          : ok=2    changed=1    unreachable=0    failed=0
host2          : ok=2    changed=1    unreachable=0    failed=0
host3          : ok=2    changed=1    unreachable=0    failed=0

To check/validate:

# ansible ahd -a "ls -l /etc/testWMD"
host1 | SUCCESS | rc=0 >>
-rw-r--r-- 1 root root 0 Oct  7 14:17 /etc/testWMD
host2 | SUCCESS | rc=0 >>
-rw-r--r-- 1 root root 0 Oct  7 14:17 /etc/testWMD
host3 | SUCCESS | rc=0 >>
-rw-r--r-- 1 root root 0 Oct  7 14:17 /etc/testWMD
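For the record, the synchronize module wraps rsync, so rsync must be installed on both ends and sysmgttl1 must be able to reach the targets over SSH. Where that is not possible, a two-step sketch pulling the file through the control node does the same job (module arguments as I understand them; untested here):

- hosts: ahd
  tasks:
    - name: pull the file from sysmgttl1 to the control node
      fetch: src=/etc/testWMD dest=/tmp/testWMD flat=yes
      delegate_to: sysmgttl1
      run_once: true
    - name: push the fetched copy to every host in the ahd group
      copy: src=/tmp/testWMD dest=/etc/testWMD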


ansible cont.

This time, a user (wmd) needs to have in his home directory a very specific file with equally specific content and permissions – and several hosts need this treatment. Ansible playbook code for today:

- hosts: jam
  tasks:

   - name: edit ~e_wmd/.k5login
     shell: echo 'E_wmd@CHOP.EDU' > /home/e_wmd/.k5login

   - name: set its ownership and permissions
     file: path=/home/e_wmd/.k5login owner=e_wmd group=wmd mode=0644

To execute this specific playbook:

# ansible-playbook wmd.yml
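The same result can be had in a single idempotent task – the copy module can deliver content, owner, group and mode at once. A sketch, using the same names as above:

- hosts: jam
  tasks:
   - name: deploy ~e_wmd/.k5login in one step
     copy:
       content: "E_wmd@CHOP.EDU\n"
       dest: /home/e_wmd/.k5login
       owner: e_wmd
       group: wmd
       mode: 0644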

This effort was required by a lack of consistency in the user definitions in AD… Often you can fix something faster than the one who owns it… 🙂


AIX/PowerHA – cannot remove caavg_private disk (change pvid of a disk)

Another chapter in storage migration… Previously, XIV delivered all the disks; currently it is VSP. A VSP disk had to replace the CAA private disk in order to complete the migration. This process, based on the procedure from one of my previous posts (search this blog for “CAA_FORCE_ENABLED=1”), kept failing, and eventually, pressed for time, all the XIV disks were removed from the cluster nodes. The HA ODM still showed the original CAA disk's PVID when queried, and the cluster would not sync or operate…

The PVID from the new CAA candidate disk was removed and replaced with the PVID of the original CAA disk. Next, both nodes were rebooted, the cluster synced, and peace returned to the cluster!

# lspv
hdisk1          00f660fd7411a5f3              rootvg          active
hdisk0          00f660fdc7e49dad              rootvg          active
hdisk10         00f660fd67ab3d30              lawappqa_vg
hdisk11         00f660fd67ab42ed              lawappqa_vg
hdisk12         00f660f667cb1f64              None
hdisk13         00f660f667cb1e16              None
hdisk14         00f660fd67ab4665              lawappqa_vg
hdisk15         00f660fd67ab4916              lawappqa_vg
hdisk16         00f660f667cb13bf              None
hdisk17         00f660f667cb0f0e              None

The last disk (hdisk17) will be the new CAA disk; first, we clear its PVID (on all cluster nodes!).

# chdev -l hdisk17 -a pv=clear

The original disk's PVID was “00f660fde083fb16”. Now it will be assigned to hdisk17 (done on the primary node).

# perl -e 'print pack("H*","00f660fde083fb16");' >/tmp/pvid
# cat /tmp/pvid | dd of=/dev/hdisk17 bs=1 seek=128
# rmdev -dl hdisk17
# shutdown -Fr
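Before removing the device, the write can be verified – AIX keeps the PVID at offset 0x80 (128) of the disk, and lquerypv can dump those bytes straight off the device:

# lquerypv -h /dev/hdisk17 80 10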

After the reboot (which really was not needed – cfgmgr could have been used instead):

# lspv
hdisk1          00f660fd7411a5f3               rootvg          active
hdisk0          00f660fdc7e49dad               rootvg          active
hdisk10         00f660fd67ab3d30               lawappqa_vg
hdisk11         00f660fd67ab42ed               lawappqa_vg
hdisk12         00f660f667cb1f64               None
hdisk13         00f660f667cb1e16               None
hdisk14         00f660fd67ab4665               lawappqa_vg
hdisk15         00f660fd67ab4916               lawappqa_vg
hdisk16         00f660f667cb13bf               None
hdisk2          00f660fde083fb16               caavg_private   active
# cd /usr/es/sbin/cluster/utilities
# ./clmgr sync cluster

Notice that hdisk17 has morphed into hdisk2, which is normal. Now, let’s start the cluster and watch it run.

Today, scouting the web, I found another way of changing an AIX disk PVID – see below for a neat script (I have not tested it). It expects the new PVID in $PVID and the target disk name in $DISK.

set -A a `echo $PVID | \
awk '{
  # emit a hex-to-octal conversion request to bc for every byte of the PVID
  for (f = 1; f <= length($0); f = f + 2)
    print "ibase=16\nobase=8\n" toupper(substr($0, f, 2))
}' | bc 2>/dev/null`
# write the eight PVID bytes, padded with zeros, at offset 128 of the target disk
/usr/bin/echo "\0"${a[0]}"\0"${a[1]}"\0"${a[2]}"\0"${a[3]}"\0"${a[4]}"\0"${a[5]}"\0"${a[6]}"\0"${a[7]}"\0\0\0\0\0\0\0\0\c" | dd bs=1 seek=128 of=/dev/$DISK


Device eth0 does not seem to be present – RHEL7.2

While trying to get the WiFi NIC running on a laptop, its eth0 interface disappeared… “Device eth0 does not seem to be present.” This host, built by KickStart, had its NIC labeled eth0, which is not the “native” way for RHEL7, so after a while of fruitless efforts I started to look for it under a different name.

To determine the location codes of all network devices:

#  lspci | grep -i net
00:19.0 Ethernet controller: Intel Corporation 82567LM Gigabit Network Connection (rev 03)
03:00.0 Network controller: Intel Corporation Ultimate N WiFi Link 5300

To locate these devices:

# cd /sys/class/net
# ls -la
total 0
lrwxrwxrwx  1 root root 0 Sep 28 08:23 docker0 -> ../../devices/virtual/net/docker0
lrwxrwxrwx  1 root root 0 Sep 28 08:13 enp0s25 -> ../../devices/pci0000:00/0000:00:19.0/net/enp0s25
lrwxrwxrwx  1 root root 0 Sep 28 08:13 lo -> ../../devices/virtual/net/lo
lrwxrwxrwx  1 root root 0 Sep 28 08:13 virbr0 -> ../../devices/virtual/net/virbr0
lrwxrwxrwx  1 root root 0 Sep 28 08:13 virbr0-nic -> ../../devices/virtual/net/virbr0-nic
lrwxrwxrwx  1 root root 0 Sep 28 08:13 wls1 -> ../../devices/pci0000:00/0000:00:1c.1/0000:03:00.0/net/wls1

The last listing indicates that eth0 is now called enp0s25.

# cd /etc/sysconfig/network-scripts
# mv ifcfg-eth0 ifcfg-enp0s25
# systemctl restart network
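Renaming the file alone may not be enough – the DEVICE= line inside it has to agree with the new interface name. Something like this one-liner (run before the restart) takes care of it:

# sed -i 's/^DEVICE=.*/DEVICE=enp0s25/' ifcfg-enp0s25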

This fixes it and the host is again accessible from the “outside” 🙂


Spacewalk2.2 db password expired

Spacewalk 2.2 servers started to send mails containing the following text:

Frame initDB in /usr/lib/python2.6/site-packages/spacewalk/server/rhnSQL/ at line 117
username = spacewalk
e = (28001, 'ORA-28001: the password has expired\n', 'spacewalk@//localhost/spacedb', 'Connection_Connect(): begin session')

It seems that the “spacewalk” user password has expired. Follow the text below to validate it and, if needed, to change it.

# su - oracle
$ sqlplus / as SYSDBA
SQL> select username, account_status, created, lock_date, expiry_date
       from dba_users
      where account_status != 'OPEN';

USERNAME                       ACCOUNT_STATUS   CREATED   LOCK_DATE EXPIRY_DATE
------------------------------ ---------------- --------- --------- -----------
SPACEWALK                      EXPIRED          12-MAR-16           15-SEP-16

SQL> alter user spacewalk identified by abc123;

SQL> select username, account_status, created, lock_date, expiry_date
       from dba_users
      where account_status = 'OPEN';

USERNAME                       ACCOUNT_STATUS   CREATED   LOCK_DATE EXPIRY_DATE
------------------------------ ---------------- --------- --------- -----------
SPACEWALK                      OPEN             12-MAR-16           25-MAR-17

SQL> exit
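Two follow-ups worth considering. Spacewalk keeps the database password in /etc/rhn/rhn.conf (the db_password key), so if a genuinely new password is set rather than the old one re-issued, that file must be updated to match. And to keep the password from expiring again, the profile limit can be lifted – a sketch, assuming the spacewalk user sits on the DEFAULT profile:

# grep db_password /etc/rhn/rhn.conf
SQL> alter profile default limit password_life_time unlimited;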


AD/KRB5 authentication issues (unexpected) with RedHat 7.2

For some unknown reason a few freshly added users could not log in to a freshly built RedHat host. Too much freshness? The host was COBBLER built – so what is going on?
This is what is recorded in /var/log/secure, showing the failed login attempt:

Sep  8 13:57:56 bctpxypl1 sshd[2397]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser=  user=wmduszyk
Sep  8 13:57:56 bctpxypl1 sshd[2397]: pam_sss(sshd:auth): authentication success; logname= uid=0 euid=0 tty=ssh ruser= user=wmduszyk
Sep  8 13:57:56 bctpxypl1 sshd[2397]: pam_krb5[2397]: account checks fail for 'WMDUSZYK@WMD.EDU': user disallowed by .k5login file for 'wmduszyk'
Sep  8 13:57:56 bctpxypl1 sshd[2397]: Failed password for wmduszyk from port 58191 ssh2
Sep  8 13:57:56 bctpxypl1 sshd[2397]: fatal: Access denied for user wmduszyk by PAM account configuration [preauth]
Sep  8 13:59:49 bctpxypl1 su: pam_unix(su-l:session): session closed for user wmduszyk

I am flabbergasted! The host has all the latest patches, and everybody else can log in! After a short search on the web I added a paragraph to /etc/krb5.conf containing the ignore_k5login = true phrase, and the login issues were gone!

Here is the file /etc/krb5.conf as it is now.

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = WMD.EDU
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 default_tgs_enctypes = rc4-hmac aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
 default_tkt_enctypes = rc4-hmac aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
 permitted_enctypes = rc4-hmac aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96

[realms]
 WMD.EDU = {
  admin_server = KERBEROS.WMD.EDU
 }

[domain_realm]
 .wmd.edu = WMD.EDU
 wmd.edu = WMD.EDU

[appdefaults]
 pam = {
  debug = false
  WMD.EDU = {
   ignore_k5login = true
  }
 }
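To confirm the fix, attempt another login and watch /var/log/secure – the pam_krb5 “user disallowed by .k5login” line should be gone:

# ssh wmduszyk@bctpxypl1
# tail -5 /var/log/secure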


recovering from san migrations errors, AIX

One AIX host is using XIV storage for its user volume groups. The data from these VGs has to be migrated to disks provided by a HITACHI SAN. This is a trivial task, already done and repeated hundreds of times. Get disks from the other SAN, mirror everything, wait for the logical volumes to sync, drop the XIV “mirrors”, remove the XIV disks from the volume groups, remove the XIV drivers, install the HITACHI drivers, reboot and be merry.
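For reference, a minimal sketch of that mirror/sync/drop routine for a single volume group (hypothetical names – datavg as the VG, hdisk5 as the XIV source, hdisk20 as the HITACHI target):

# extendvg datavg hdisk20     # add the new HITACHI disk to the VG
# mirrorvg datavg hdisk20     # mirror every logical volume onto it
# syncvg -v datavg            # wait until no partitions are stale
# unmirrorvg datavg hdisk5    # drop the XIV copy
# reducevg datavg hdisk5      # remove the XIV disk from the VG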

Occasionally, the SAN administrator will remove the disks (LUNs) you are migrating from before you can drop the mirrors these disks belong to… Luckily, he does it after the mirrors are already synced…

Reboot – if you do not know what to do next. After a reboot, no user volume group (rootvg sits on SAS disks) will be able to come online (be varied on), even with “force”. But this situation is really not as bad as it looks. Make a note of which disks belong to which volume group, export the VGs and import them back with “force”. The following documents the way out.

# exportvg mksysbvg
# importvg -f -y mksysbvg hdisk3
# exportvg devegate_vg
# importvg -f -y devegate_vg hdisk2
# mount all

The two disks above are the ones from the new SAN (HITACHI).


using ansible to modify /etc/sudoers

Membership in a user group defined in /etc/sudoers needs to be changed across a large number of hosts, which makes it an ideal situation to learn how to do it with Ansible!
Here is my playbook. There are three tasks. The first one looks for a very specific definition of the user alias called DBAS – this is what the entry should be on all hosts! The search result (success or failure) is stored in the variable appropriately called grep_result. Our playbook is instructed to keep running even when that grep command fails.

The second task makes a copy of /etc/sudoers, but only when the previous grep failed, in anticipation of the change delivered by the next task, which calls on sed to replace the existing alias with the new one. This task, just like the one before it, is executed only when the user alias is found to differ from the new one.

- hosts: idm
  remote_user: root
  tasks:
   - name: check if sudoers has the correct definition
     command: grep 'User_Alias DBAS = pricea, hankeej, swensong, hanfrordm, santinejj' /etc/sudoers
     register: grep_result
     ignore_errors: true
   - name: copy /etc/sudoers to /etc/sudoers.wmd
     copy: src=/etc/sudoers dest=/etc/sudoers.wmd force=no
     when: grep_result|failed
   - name: insert the new 'User_Alias DBAS' definition
     shell: sed --in-place 's/User_Alias DBAS =.*/User_Alias DBAS = pricea, hankeej, swensong, hanfrordm, santinejj/' /etc/sudoers
     when: grep_result|failed

To verify the results of this playbook's action, one could execute the following line.

# ansible idm -a "grep 'User_Alias DBAS' /etc/sudoers"
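Since a syntax error in /etc/sudoers locks everybody out of sudo, it may be worth letting visudo check the result as well – with -c it only parses the file:

# ansible idm -a "visudo -c"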


starting something new – ansible

I am learning Ansible, and this is my first playbook; its job is to deploy a user in an interactive fashion – the login name and group memberships must be entered.

      1 ---
      2 - hosts: localhost
      3   remote_user: root
      5   vars_prompt:
      7      - name: "login"
      8        prompt: "login name"
      9        private: no
     11      - name: "group"
     12        prompt: "primary group"
     13        private: no
     15      - name: "extra_groups"
     16        prompt: "additional groups"
     17        private: no
     19   tasks:
     21      - name: check if user exists in /etc/passwd
     22        shell: /usr/bin/grep -q {{ login }} /etc/passwd
     23        register: local_login
     24        ignore_errors: True
     26      - name: check groups
     27        shell: /usr/bin/grep -q {{ item }} /etc/group
     28        with_items: "{{ extra_groups.split(',') }}"
     29        ignore_errors: False
     30        when: local_login|failed
     32      - name: create new user
     33        user:
     34          name="{{ login }}"
     35          group="{{ group }}"
     36          groups="{{ extra_groups }}"
     37          comment="New User called {{ login }}"
     38          password='$1$some_pla$/W4Aou.rEdpJKYX5nwOPX.'
     39          createhome=yes
     40          generate_ssh_key=yes
     41        register: user_created
     42        when: local_login|failed
     44      - name: force user to change password at the first login
     45        shell: /usr/bin/chage -d 0 {{ login }}
     46        when: user_created.changed

The first line contains the mandatory set of dashes identifying the file to the YAML interpreter. The next two lines identify which hosts this playbook will be applied to (localhost) and who will execute its contents (root). Line 5 defines the block of questions and answers, which will be used to set the variables storing the login name and the primary and additional groups. The private: no directive allows the text you type to stay visible. On the other hand, if you were asking for a password, you could use private: yes to prevent it from being displayed on the screen as it is entered.

The task on line 21 checks whether the entered login name is already present in the /etc/passwd file, using the Ansible shell module to run grep against /etc/passwd and placing the result in the variable called local_login. This task is instructed not to fail the play even when the user is not found – for us that is the desired state! Conversely, when the user is already present, the remaining tasks are simply skipped. This is accomplished by the ignore_errors: True statement together with the when: local_login|failed conditions further down.

Next, the entered groups (comma delimited) are validated against /etc/group, but only on failure of the previous task. This time the situation is different: we want the play to fail when one of the provided groups is not defined in the local /etc/group file – hence ignore_errors: False.
“Looping” over the list of additional groups is done via the with_items: "{{ extra_groups.split(',') }}" entry, which splits the entered answer on the comma character.

Once the login name is not found in the local /etc/passwd file and all the groups are accounted for, the user is created by calling the built-in user: module. The outcome of this action is stored in the variable user_created, whose value determines whether or not to age the password, forcing its change at the first login.
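A sample session, assuming the playbook is saved as new_user.yml (the file name and the answers below are made up):

# ansible-playbook new_user.yml
login name: jdoe
primary group: staff
additional groups: wheel,developers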


Copyright © 2016 Waldemar Mark Duszyk. All Rights Reserved.