
Spacewalk 2.2 db password expired

Satellite 2.2 servers started to send e-mails containing the following text:

Frame initDB in /usr/lib/python2.6/site-packages/spacewalk/server/rhnSQL/ at line 117
username = spacewalk
e = (28001, 'ORA-28001: the password has expired\n', 'spacewalk@//localhost/spacedb', 'Connection_Connect(): begin session')

It seems that the “spacewalk” user password has expired. Follow the steps below to verify it and, if necessary, change it.

# su - oracle
$ sqlplus / as SYSDBA
SQL> select username, account_status, created, lock_date, expiry_date
 from dba_users
where account_status != 'OPEN';
USERNAME   ACCOUNT_STATUS  CREATED    LOCK_DATE  EXPIRY_DATE
---------  --------------  ---------  ---------  -----------
SPACEWALK  EXPIRED         12-MAR-16             15-SEP-16

SQL> alter user spacewalk identified by abc123;

SQL> select username, account_status, created, lock_date, expiry_date
 from dba_users
where account_status = 'OPEN';

USERNAME   ACCOUNT_STATUS  CREATED    LOCK_DATE  EXPIRY_DATE
---------  --------------  ---------  ---------  -----------
SPACEWALK  OPEN            12-MAR-16             25-MAR-17
SQL> exit
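One follow-up worth noting: Spacewalk also keeps the database password in /etc/rhn/rhn.conf, so after the ALTER USER the config has to agree with it and the services need a restart. A sketch, exercised here against a scratch stand-in for the real file:

```shell
# Sketch: sync the new DB password into Spacewalk's config. Demonstrated on
# a scratch copy; point CONF at /etc/rhn/rhn.conf on the real server.
CONF=$(mktemp)
printf 'db_user = spacewalk\ndb_password = oldpass\n' > "$CONF"   # stand-in
sed -i 's/^db_password *=.*/db_password = abc123/' "$CONF"        # new password
grep '^db_password' "$CONF"
# then on the real server: spacewalk-service restart
```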

Posted in LINUX.


AD/KRB5 authentication issues (unexpected) with Red Hat 7.2

For some unknown reason a few freshly added users could not log in to a freshly built Red Hat host. Too fresh? The host was built with Cobbler, so what is going on?
This is what is recorded in /var/log/secure showing the failed login attempt:

Sep  8 13:57:56 bctpxypl1 sshd[2397]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser=  user=wmduszyk
Sep  8 13:57:56 bctpxypl1 sshd[2397]: pam_sss(sshd:auth): authentication success; logname= uid=0 euid=0 tty=ssh ruser= user=wmduszyk
Sep  8 13:57:56 bctpxypl1 sshd[2397]: pam_krb5[2397]: account checks fail for 'WMDUSZYK@WMD.EDU': user disallowed by .k5login file for 'wmduszyk'
Sep  8 13:57:56 bctpxypl1 sshd[2397]: Failed password for wmduszyk from port 58191 ssh2
Sep  8 13:57:56 bctpxypl1 sshd[2397]: fatal: Access denied for user wmduszyk by PAM account configuration [preauth]
Sep  8 13:59:49 bctpxypl1 su: pam_unix(su-l:session): session closed for user wmduszyk

I am flabbergasted! The host has all the latest patches, and everybody else can log in! After a short search on the web I added a paragraph to /etc/krb5.conf containing the ignore_k5login = true phrase and the login issues were gone!

Here is the file /etc/krb5.conf as it is now.

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = WMD.EDU
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 default_tgs_enctypes = rc4-hmac aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
 default_tkt_enctypes = rc4-hmac aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
 permitted_enctypes = rc4-hmac aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96

[realms]
 WMD.EDU = {
  admin_server = KERBEROS.WMD.EDU
 }

[domain_realm]
 WMD.EDU = WMD.EDU

[appdefaults]
 pam = {
  debug = false
  WMD.EDU = {
   ignore_k5login = true
  }
 }


recovering from SAN migration errors, AIX

One AIX host uses XIV storage for its user volume groups. The data from these vgs has to be migrated to disks provided by a HITACHI SAN. This is a trivial task, already done and repeated hundreds of times: get disks from the other SAN, mirror everything, wait for the logical volumes to sync, drop the XIV “mirrors”, remove the XIV disks from the volume groups, remove the XIV drivers, install the HITACHI drivers, reboot and be merry.

Occasionally, the SAN administrator will remove the disks (LUNs) being migrated from before you can drop the mirrors these disks belong to. Luckily, he does it only after the mirrors are already synced.

Reboot – if you do not know what to do next. After a reboot, no user volume group (rootvg is on SAS disks) will come online (vary on), even with “force”. But this situation is really not as bad as it looks. Make a note of which disks belong to which volume group, export the vgs and import them back with “force”. The following documents the way out.

# exportvg mksysbvg
# importvg -f -y mksysbvg hdisk3
# exportvg devegate_vg
# importvg -f -y devegate_vg hdisk2
# mount all

The two disks above are the ones from the new SAN (HITACHI).


using ansible to modify /etc/sudoers

Membership in a user group defined in /etc/sudoers needs to be changed across a large number of hosts, which makes it an ideal situation to learn how to do it with Ansible!
Here is my playbook. There are three tasks. The first one looks for a very specific definition of the user alias called DBAS; this is what the entry should be on all hosts! The search result is stored in the variable appropriately called grep_result, and the playbook is instructed to keep running even when this grep command fails.

The second task makes a copy of /etc/sudoers, but only on failure of the previous grep, in anticipation of the change delivered by the next task, which calls on sed to replace the existing alias with the new one. This task, just like the one before it, is executed only when the user alias is found to differ from the new one.

- hosts: idm
  remote_user: root
  tasks:
   - name: check if sudoers has the correct definition
     command: grep 'User_Alias DBAS = pricea, hankeej, swensong, hanfrordm, santinejj' /etc/sudoers
     register: grep_result
     ignore_errors: true
   - name: copy /etc/sudoers to /etc/sudoers.wmd
     copy: src=/etc/sudoers dest=/etc/sudoers.wmd remote_src=yes force=no
     when: grep_result|failed
   - name: insert the new 'User_Alias DBAS' definition
     shell: sed --in-place 's/User_Alias DBAS =.*/User_Alias DBAS = pricea, hankeej, swensong, hanfrordm, santinejj/' /etc/sudoers
     when: grep_result|failed

To verify the results of this playbook, one could execute the following line.

# ansible idm -a "grep 'User_Alias DBAS' /etc/sudoers"
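The sed substitution in the play can be exercised safely against a scratch copy first, and since sed bypasses visudo's locking and syntax check, validating the real file afterwards is a worthwhile extra step. A sketch (the scratch file and old alias members below are made up):

```shell
# Dry-run of the play's sed substitution against a scratch stand-in for
# /etc/sudoers (the file and the old alias members here are made up).
F=$(mktemp)
echo 'User_Alias DBAS = olduser1, olduser2' > "$F"
sed --in-place 's/User_Alias DBAS =.*/User_Alias DBAS = pricea, hankeej, swensong, hanfrordm, santinejj/' "$F"
grep 'User_Alias DBAS' "$F"
# on a real host, validate afterwards: visudo -c -f /etc/sudoers
```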


starting something new – ansible

I am learning Ansible and this is my first playbook, whose job is to deploy a user in an interactive fashion. The login name and group membership must be entered.

      1 ---
      2 - hosts: localhost
      3   remote_user: root
      5   vars_prompt:
      7      - name: "login"
      8        prompt: "login name"
      9        private: no
     11      - name: "group"
     12        prompt: "primary group"
     13        private: no
     15      - name: "extra_groups"
     16        prompt: "additional groups"
     17        private: no
     19   tasks:
     21      - name: check if user exists in /etc/passwd
     22        shell: /usr/bin/grep -q {{ login }} /etc/passwd
     23        register: local_login
     24        ignore_errors: True
     26      - name: check groups
     27        shell: /usr/bin/grep -q {{ item }} /etc/group
     28        with_items: "{{ extra_groups.split(',') }}"
     29        ignore_errors: False
     30        when: local_login|failed
     32      - name: create new user
     33        user:
     34          name="{{ login }}"
     35          group="{{ group }}"
     36          groups="{{ extra_groups }}"
     37          comment="New User called {{ login }}"
     38          password='$1$some_pla$/W4Aou.rEdpJKYX5nwOPX.'
     39          createhome=yes
     40          generate_ssh_key=yes
     41        register: user_created
     42        when: local_login|failed
     44      - name: force user to change password at the first login
     45        shell: /usr/bin/chage -d 0 {{ login }}
     46        when: user_created.changed

The first line contains the mandatory set of dashes identifying the file to the YAML interpreter. The next two lines identify the host this playbook will be applied to (localhost) and who will be executing its contents (root). Line 5 defines the block of questions and answers used to set the variables storing the login name, the primary group and the additional groups. The private: no directive keeps the text you type visible. On the other hand, if you were asking for a password, you could use private: yes to prevent it from being displayed on the screen as it is entered.

The task on line 21 checks if the entered login name is already present in the /etc/passwd file, using the Ansible shell module to run grep against /etc/passwd and placing the return value in the variable called local_login. This task is instructed not to fail the play even when the user is not found, as for us this is the desired state; that is accomplished by the ignore_errors: True statement. We still want the playbook to stop doing any work when the user is already present, which the later when: local_login|failed conditions take care of.

Next, the entered groups (comma delimited) are validated against /etc/group, but only on failure of the previous task. This time the situation is different: we want the play to fail in case one of the provided groups is not defined in the local /etc/group file, hence ignore_errors: False.
“Looping” over the list of additional groups is done via the with_items: "{{ extra_groups.split(',') }}" entry, which splits the input on the comma character.

If the login name is not found in the local /etc/passwd file and all groups are accounted for, the user is created by calling the built-in user: module. The outcome of this action is stored in the variable user_created, whose value determines whether or not to age the password, forcing a change at the first login.
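One thing the play assumes silently is that password= takes a pre-computed crypt hash, not plain text. A hash in the $1$ (MD5-crypt) format used above can be generated with openssl; the password string below is just a placeholder:

```shell
# Generate an MD5-crypt hash ($1$ prefix, the format used by the play's
# password= parameter). 'some_password' is a placeholder.
HASH=$(openssl passwd -1 'some_password')
echo "$HASH"    # something of the shape $1$<salt>$<hash>
```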


adding swap space – RH7.x

a. Stop swap activity on the existing swap partition or partitions:

# swapoff -v /dev/mapper/rootvg-swap

b. Create the new swap volume (this host uses the logical volume manager, not fdisk partitions); give it 2 GB:

# lvcreate rootvg -n swap-vol2 -L 2G

c. Turn it into swap:

# mkswap /dev/rootvg/swap-vol2

d. Activate it:

# swapon -v /dev/rootvg/swap-vol2

e. Edit /etc/fstab so it comes back every time the host is booted:

# echo "/dev/rootvg/swap-vol2  swap                swap    defaults        0 0" >> /etc/fstab
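A quick sanity check after the fstab edit confirms the new space is live (swapon --show is too new for RHEL 7's util-linux, so the older summary forms are used):

```shell
# Verify the new swap device is active and counted.
swapon -s                  # per-device summary (reads /proc/swaps)
free -m | grep -i '^Swap'  # the total should now include the extra 2 GB
```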


deploying ssh keys to remote hosts

I have discovered Ansible and as a result I have to deploy ssh keys to a few hundred UNIX/LINUX boxes….. Yes, there is ssh-copy-id, which is fine for one machine, but that is not going to work for me. The idea of repetitively entering the root password makes me sick….. Is there anything else?
Yes, there is actually more than one way to finish this task. There is sshpass, which you can download from EPEL and which works with ssh-copy-id like this:

# sshpass -f pass.txt ssh-copy-id -i ~root/.ssh/ target_host_name

where the pass.txt file contains the root password and target_host_name is the destination host name.

But there is an even better way! Thanks to Travis Bear, who created ssh-deploy-key. You can learn more about it by following this link.

Here is Travis’s comparison of ssh-deploy-key with some other common ways to deploy a key, excerpted from his docs:

“Deploying ssh keys by hand is a time-honored technique that in general works pretty well. However, in almost all cases, using ssh-deploy-key is a better option. It’s faster, easier, more reliable, and more repeatable. When deploying to more than one host at a time, these advantages only multiply with ssh-deploy-key’s bulk deployment abilities. There is one use case where deploying by hand is a better bet: when the remote host is on a different network, behind a jump box. ssh-deploy-key does not handle that scenario.

ssh-copy-id is a great tool, but it’s not the ideal solution for every scenario.
• ssh-copy-id is not installed by default on all systems, notably on Mac OS.
• ssh-copy-id has no concept of ‘smart append’. It will append a key to a remote host’s authorized keys file regardless of whether that key is already present.
• Scripting the use of ssh-copy-id for deploying to multiple remote hosts can be challenging: the password is entered interactively for each host, and in the case where there are numerous remote hosts you have not seen before, you’d need to interactively allow each host to be added to your known_hosts file.

Configuration management tools (like Puppet, Ansible, etc.) can do a terrific job deploying ssh key(s). But if you are not already set up to use them for key distribution, these general-purpose solutions can be overkill, especially when compared with a dedicated tool like ssh-deploy-key that only does one thing.”
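The ‘smart append’ Travis mentions is easy to picture in plain shell: append the key only when an identical line is not already present. A sketch (the scratch file and key text below are made up):

```shell
# Sketch of 'smart append': add a public key to an authorized_keys file
# only if it is not already there. Shown on a scratch stand-in file.
AK=$(mktemp)                                   # stand-in for ~/.ssh/authorized_keys
KEY='ssh-rsa AAAAB3...example root@admin'      # placeholder key text
grep -qxF "$KEY" "$AK" || echo "$KEY" >> "$AK" # appends: key not present yet
grep -qxF "$KEY" "$AK" || echo "$KEY" >> "$AK" # second run is a no-op
grep -cxF "$KEY" "$AK"                         # prints 1 - still only one copy
```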

Installing this utility requires two steps:

# yum -y install python-pip python-devel
# pip install ssh-deploy-key

The spacecmd system_list command (Satellite/Spacewalk) generated all the host names, which were collected in the HOSTS file. Actually processing the list of hosts was then extremely easy – "ssh-deploy-key -d < HOSTS"

The -d flag could be a very important one to remember. Without it, the target host’s /root/.ssh/authorized_keys file will be overwritten - every key already defined there will be gone!!!


switching groups and mixed case logins with sssd

Several users, each with their own group set, need to collaborate on data located in a folder owned by yet another group. The solution is to add the data folder’s group to each user’s group set and then, using the gpasswd command, set a password on the “data” group.
To be able to “enter” the data folder, a user makes this group his/her primary group by executing the newgrp - group_name command, entering the group password and moving to the data folder.

To force LINUX with SSSD to always display login names in lowercase, regardless of the format used in Active Directory, you have to have this entry in your /etc/sssd/sssd.conf file:

case_sensitive = False 

Edit this file, restart the sssd service, clean its cache with the sss_cache -E command and you are as good as new.
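For context, here is roughly where that option lives in /etc/sssd/sssd.conf; the domain name and the provider line are assumptions for illustration only:

```ini
; illustrative fragment - the domain name and id_provider are assumptions
[domain/wmd.edu]
id_provider = ad
case_sensitive = False
```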


migrating from “THICK” to “THIN” VMware disks with pvmove

I have a Linux guest with one disk provisioned as THICK (by mistake or an act of God). It is claimed by a volume group with one almost full logical volume/file system. I need to change the storage type to thin and simultaneously provide more capacity to accommodate the constant growth of data.
There are numerous ways to resolve this situation. For example, to change the disk type one could use Storage vMotion (assuming one has a license) or vmkfstools. Next, the converted disk could be “grown” to the required capacity.
Or you could create a new thinly provisioned disk of the required capacity, add it to the volume group and move the data either via a mirror or by relocating the physical extents. Finally, remove the “thick” disk from its volume group and from the guest.
In this case the THICK disk is /dev/sdb and the new THIN one is /dev/sdc.
The logical volume is defined as zoom_vg/zoom_lv.

# vgextend zoom_vg /dev/sdc
# pvmove -n zoom_lv /dev/sdb /dev/sdc
  Detected pvmove in progress for /dev/sdb
  Ignoring remaining command line arguments
  /dev/sdb: Moved: 11.1%
  /dev/sdb: Moved: 12.5%
  /dev/sdb: Moved: 13.9%
  /dev/sdb: Moved: 15.4%
  /dev/sdb: Moved: 98.4%
  /dev/sdb: Moved: 99.9%
  /dev/sdb: no pvmove in progress - already finished or aborted.

During the migration, one can use the lvs command to gauge its progress.

# lvs -a
  LV        VG      Attr       LSize     ......... Log Cpy%Sync
  lv_home   vg_sys  -wi-ao----   1.95g
  lv_root   vg_sys  -wi-ao----  10.84g
  lv_swap   vg_sys  -wi-ao----   3.91g
  lv_temp   vg_sys  -wi-ao----   3.91g
  lv_usr    vg_sys  -wi-ao----   7.91g
  lv_var    vg_sys  -wi-ao----   5.91g
  [pvmove0] zoom_vg p-C-aom--- 199.00g     /dev/sdb     48.20
  zoom_lv   zoom_vg -wI-ao---- 199.00g

# lvs -a | grep pvmove
  [pvmove0] zoom_vg p-C-aom--- 199.00g     /dev/sdb     49.34

# lvs -a | grep pvmove
  [pvmove0] zoom_vg p-C-aom--- 199.00g     /dev/sdb     49.62

When pvmove finishes, we drop /dev/sdb from its volume group.

# vgreduce zoom_vg /dev/sdb
  Removed "/dev/sdb" from volume group "zoom_vg"
# pvs
  PV         VG      Fmt  Attr PSize   PFree
  /dev/sda2  vg_sys  lvm2 a--   34.61g 196.00m
  /dev/sdb           lvm2 ---  200.00g 200.00g
  /dev/sdc   zoom_vg lvm2 a--  300.00g 101.00g

To finish, we remove it from the guest definition in VMware, and the “thick” disk is finally gone.


vmware, snapshots, etc

Before Oracle Linux hosts were unregistered from ULN and registered with an internal SpaceWalk system, a snapshot was made of each. Now, a month later, there is a large number of snapshots that have to be removed.
It is very easy to find these hosts (VMWare guests) – their names start with “EIE”.

PowerCLI in action…..

To list selected guests and their snapshots execute the following command:

get-vm | where {$_.Name -match "EIE"} | Get-Snapshot | format-list vm,name
VM   : EIExxxx2
Name : ULN_Registered
VM   : EIEyyyy1
Name : ULN_Registered
VM   : EIEqqqq2
Name : ULN_Registered
VM   : EIEwwwww3
Name : ULN_Register

To delete a snapshot with confirmation:

get-vm | where {$_.Name -match "EIE"} | Get-Snapshot | Remove-Snapshot

You will be asked to verify that you really mean it, and only after you say “Yes” will the selected snapshot be removed.

To remove all snapshots without being asked to confirm, execute:

get-vm | where {$_.Name -match "EIE"} | Get-Snapshot | Remove-Snapshot -Confirm:$false


Copyright © 2016 Waldemar Mark Duszyk. All Rights Reserved. Created by Blog Copyright.