execute commands from RH Satellite on its clients

… using the osad services. To set it up, follow these steps.

On your Satellite server:

a. install the dispatcher rpm.

# yum -y install osa-dispatcher

b. enable it so it starts on boot.

# chkconfig osa-dispatcher on

c. start it so you can use it right away.

# service osa-dispatcher start
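
To confirm the dispatcher is running and registered to start on boot (standard service/chkconfig checks, nothing Satellite-specific assumed):

# service osa-dispatcher status
# chkconfig --list osa-dispatcher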

On your RH Satellite client (host) side:

a. make sure it is subscribed to the RHN Tools child channel of your given release.

b. install these two packages (the osad scheduler client and the rpm that enables remote actions):

# yum -y install osad rhncfg-actions

c. enable remote command execution. The following session shows the process:

[root@rcpsqlpl1 ~]# rhn-actions-control --report
deploy is disabled
diff is disabled
upload is disabled
mtime_upload is disabled
run is disabled
# rhn-actions-control --enable-run
# rhn-actions-control --report
deploy is disabled
diff is disabled
upload is disabled
mtime_upload is disabled
run is enabled
# rhn-actions-control --enable-all
# rhn-actions-control --report
deploy is enabled
diff is enabled
upload is enabled
mtime_upload is enabled
run is enabled

d. enable the osad service so it starts on boot.

# chkconfig osad on

e. start it now so you can use its services immediately.

# service osad start
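
To verify the client side, a quick check could look like this (a sketch; rhn_check comes from the rhn-check package, normally present on registered clients):

# service osad status
# rhn_check -vv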

Posted in Real life AIX.



to build AIX SystemMirror 7.1 from command line

Attempts to migrate from SystemMirror 6.1 to 7.1 failed, were forgotten for a long time, and now the “cluster” has to happen. Before this cluster got forgotten, its nodes were “messed up” by different admins attempting to do “magic” that never worked. Today, both nodes underwent a cleansing, aka removal of the existing cluster definitions and filesets, unification and cleaning of /etc/hosts and /etc/cluster/rhosts, and assignment of proper and consistent node names/unames/hostnames. The names are very important and should never be changed after the cluster is built, really! This post does not document a cluster upgrade via “migration”: this is a brand new installation/cluster configuration process. We installed SystemMirror 7.1.3.0 followed with the upgrade_all to 7.1.3.2. Lather, rinse and repeat on the other node. When done, reboot both nodes.

When a cluster node boots it gets assigned an IP address that we call its “boot IP”. Each cluster node has one unique boot IP from the same network. Each node can ping its partner node in the future cluster (this requires using the router assigned to the cluster service address network). I use the short label associated with the boot address in /etc/hosts as the node name. The last statement means that the hostname -s and uname -n commands will generate the same output. In this case it will be node2b and node1b respectively.

The /etc/hosts file was formatted following the format required by SystemMirror, which in my case looks like this:

127.0.0.1       loopback localhost
10.254.245.72   node1b.wmd.edu   node1b
10.254.245.73   node2b.wmd.edu   node2b
10.19.80.108    node2
10.19.80.31     node1
10.19.80.112    egtapqu002      # cluster Service IP

Note the order: the numeric address, the fully qualified hostname and finally its short version. You do not need to follow this rule for the non-bootable entries. Our cluster will contain two nodes, node1b and node2b. The non-routable, "bootable" network is used to pass Ethernet packets, heartbeats, etc. So, in order for clients to log in to these hosts, we have to assign to each node's network interface an IP alias from a "routable" network. We will assign the cluster "service" address to the same routable network as the two aliases.

After the software installations are done, the nodes rebooted, and their hostname/uname verified (node1b and node2b), we need to edit /etc/cluster/rhosts so it contains only the bootable IP addresses and the ones associated with the node aliases:

10.254.245.72
10.254.245.73
10.19.80.108
10.19.80.31

At this time, it is also a good idea to edit /usr/es/sbin/cluster/netmon.cf so it contains addresses of external routers, time servers, etc. These targets should live on networks different from the one used by our aliases and the service address. A hypothetical example is shown below.
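
A minimal netmon.cf sketch, one target address per line; the addresses below are placeholders, not real entries from this environment:

10.200.1.1
10.200.2.10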

Let the fun begin! We will build our new cluster from the command line. Create a two-node cluster called egtaptq2 using hdisk8 as the pass-through/heartbeat disk.

node1b:TP:/> clmgr add cluster egtaptq2 \
                    nodes=node1b,node2b \
                    type=NSC \
                    heartbeat_type=unicast \
                    repository_disk=hdisk8

Cluster Name:    egtaptq2
Cluster Type:    Stretched
Heartbeat Type:  Multicast
Repository Disk: None
Cluster IP Address: None

There are 2 node(s) and 1 network(s) defined

NODE node2b:
        Network net_ether_01
        node2b    10.254.245.73

NODE node1b:
        Network net_ether_01
        node1b    10.254.245.72
.............

Let's check if this is a STRETCHED cluster. Executing /usr/es/sbin/cluster/utilities/cltopinfo delivers:

Cluster Name:    egtaptq2
Cluster Type:    Standard
Heartbeat Type:  Unicast
Repository Disk: hdisk7 (00f660fd661b72a4)

There are 2 node(s) and 1 network(s) defined

NODE node2b:
        Network net_ether_01
        node2b    10.254.245.73

NODE node1b:
        Network net_ether_01
        node1b    10.254.245.72

No resource groups defined

So the cluster has indeed been created as a Standard one with UNICAST as the heartbeat mode, nice. Executing lspv, I could not see the caavg_private volume group built on the selected heartbeat disk, so I requested a cluster synchronization.

node1b:TP:/>clmgr sync cluster 

When the sync finished, the cluster “heartbeat” volume group was in place.

node1b:TP:/>lspv | grep caa
hdisk7          00f660fd661b72a4    caavg_private   active

Time to define the persistent addresses (aliases on the routable network).

node1b:TP:/>clmgr add persistent_ip 10.19.80.31 \
                  network=net_ether_01 \
                  node=node1b

node1b:TP:/>clmgr add persistent_ip 10.19.80.108 \
                 network=net_ether_01 \
                 node=node2b

Validate our action:

node1b:TP:/>clmgr -v q pe
NAME="node2"
IPADDR="10.19.80.108"
NODE="node2b"
NETMASK="255.255.255.0"
NETWORK="net_ether_01"
INTERFACE=""
NETTYPE="ether"
TYPE="persistent"
ATTR="public"
GLOBALNAME=""
HADDR=""

NAME="node1"
IPADDR="10.19.80.31"
NODE="node1b"
NETMASK="255.255.255.0"
NETWORK="net_ether_01"
INTERFACE="en0"
NETTYPE="ether"
TYPE="persistent"
ATTR="public"
GLOBALNAME=""
HADDR=""

Reboot again and watch each node come back with two addresses on its en0 interface - the bootable one and the routable alias. At this time, you no longer need the HMC to log in to these nodes.
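
To confirm that both addresses are indeed active on en0, standard AIX commands (not part of the original write-up) will do:

node1b:TP:/>ifconfig en0
node1b:TP:/>netstat -in | grep en0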

Not much is left: define the application controller (the holder of the start/stop scripts)

node1b:TP:/>clmgr add application_controller egate2Ctrl \
     startscript=/usr/es/sbin/cluster/scripts/start_cluster.ksh \
     stopscript=/usr/es/sbin/cluster/scripts/stop_cluster.ksh

and validate it.

node1b:TP:/>clmgr -v q ac
NAME="egate2Ctrl"
MONITORS=""
STARTSCRIPT="/usr/es/sbin/cluster/scripts/start_cluster.ksh"
STOPSCRIPT="/usr/es/sbin/cluster/scripts/stop_cluster.ksh"

Let's create a resource group called egtqa2RG:

node1b:TP:/>clmgr add resource_group egtqa2RG \
                  nodes=node1b,node2b \
                  startup=OHN \
                  fallover=FNPN \
                  fallback=NFB \
                  service_label=egtapqu002 \
                  applications=egate2Ctrl \
                  volume_group=egtapq002_vg \
                  fs_before_ipaddr=true

and validate it.

node1b:TP:/root>clmgr -v q rg
NAME="egtqa2RG"
CURRENT_NODE=""
NODES="node1b node2b"
STATE="UNKNOWN"
TYPE=""
APPLICATIONS="egate2Ctrl"
STARTUP="OHN"
FALLOVER="FNPN"
FALLBACK="NFB"
NODE_PRIORITY_POLICY="default"
NODE_PRIORITY_POLICY_SCRIPT=""
NODE_PRIORITY_POLICY_TIMEOUT=""
DISK=""
VOLUME_GROUP="egtapq002_vg"
FORCED_VARYON="false"
FILESYSTEM=""
FSCHECK_TOOL="fsck"
RECOVERY_METHOD="sequential"
........

At this stage, we synchronize the cluster and check its status.
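
The synchronization itself is the same clmgr call used earlier:

node1b:TP:/>clmgr sync cluster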

node2b:VF:/root>clmgr -cv -a name,state q node
# NAME:STATE
node2b:OFFLINE
node1b:OFFLINE

To start the cluster services:

node1b:TP:/root>clmgr start cluster when=now \
          manage=manual clinfo=true fix=interactive
....
node1b: Feb  6 2015 08:06:55
node1b: Completed execution of /usr/es/sbin/cluster/etc/rc.cluster
node1b: with parameters: -boot -N -M -b -i -C interactive -P cl_rc_cluster.
node1b: Exit status = 0

Start the resource group on the other node:

node1b:TP:/root> clmgr online resource_group egtqa2RG node=node2b

Where is our resource group?

node1b:TP:/root> clmgr -cv -a name,state,current_node q rg
# NAME:STATE:CURRENT_NODE
egtqa2RG:ONLINE:node2b

Take our resource group down:

node2b:VF:/root>clmgr offline resource_group egtqa2RG node=node2b
Attempting to bring group egtqa2RG offline on node node2b.
Waiting for the cluster to process the resource group movement request....
Waiting for the cluster to stabilize......

Well, this is all for now. I cannot do the failover because only one node has all the required resources. Soon, we will get more memory and CPUs so the other node will match its partner and we will be able to put the cluster through its paces. Till then, enjoy your weekend - it is getting close :-)

Posted in Real life AIX.



IntegrityError: null value in column “package_id” violates not-null constraint

This message was taken from an email received from a Red Hat Satellite server… Apparently Satellite has issues synchronizing its packages with RHN.
This issue is caused by a bad satellite-sync cache on the Satellite server. First, clear the cache as shown below and then sync a specific Satellite channel or all of its channels at once:

  # rm -rf /var/cache/rhn/satsync/* 

This step takes a while to run. When it's done, once again sync a specific Red Hat Satellite channel:

  # satellite-sync -c <channel-label> 

or all of its channels at once:

  # satellite-sync 

Posted in Linux, Real life AIX.



Building SystemMirror 7.1 cluster

Many months ago, I attempted to migrate from 6.1 to 7.1 and all my attempts failed one by one. The cluster would not sync and there seemed to be no way to get it up and running. Eventually, I gave up and put it on a “back burner”. A week ago, both I and the nodes in the “cluster” had time for another shot.

I started by removing the cluster definition (using smitty hacmp), followed by removal of all “hacmp cluster” related filesets. Next, I installed them again (SystemMirror 7.1.3.0), followed by an upgrade_all to the 7.1.3.2 code. At this time, I also re-created /etc/hosts and /etc/cluster/rhosts and made sure they were identical on each node.

Each node had the same hostname/uname as the label in /etc/hosts associated with its boot IP address. Next, since our bootable addresses are not routable, each node received an IP alias on the same network as the “service” address, followed by setting the gateway address on the bootable interfaces. Yes, both nets use the same netmask!

Reboot both nodes, start clcomd and configure the cluster. It took a few sync failures before the sun started shining in my neck of the woods. The first few syncs failed for no apparent reason (asking to contact IBM…), but I noticed that there was no heartbeat volume group, caavg_private, and no file system associated with it. What helped is shown below. You guessed it: it was executed on each node.

# export CAA_FORCE_ENABLED=1
# rmcluster -f -r hdisk9
# rmcluster -f -r hdisk10
# rmdev -dl hdisk10
# rmdev -dl hdisk9
# cldare -rtV normal

# shutdown -Fr
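
For reference, the quick check that revealed the missing heartbeat volume group (a plain lspv filter) can be repeated after the cleanup and the next sync:

# lspv | grep caa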

Originally (and the plan still has not changed), I set hdisk10 as the heartbeat disk and hdisk9 as its backup, so I went back to the cluster configuration menu and set them up again.

The next sync almost succeeded, but it failed because an entry was missing from the /etc/snmpv3.conf file. Why in this file? I had no idea; we use snmp ver. 2. But I complied and added the missing entry. Here it is:

smux 1.3.6.1.4.1.2.3.1.2.1.5 clsmuxpd_password

Another sync, which took really long, and the long-awaited OK prompt showed up! Next week, I have another 7.1.3.2 cluster to build.
But this time I will try to set it up via the command line alone; I have never done it, it should be fun!
:-)

Posted in HACMP, Real life AIX.



to reboot a RedHat host in the future

It could be done with the at command, with crontab -e executed as root, or using the plain old shutdown -r provided with the appropriate time, like for example:

#  nohup shutdown -r 13:00 &

..................
The system is going down for reboot in 90 minutes!

Broadcast message from duszyk@sysmgttl1
        (unknown) at 12:00 ...

The system is going down for reboot in 60 minutes!

Broadcast message from duszyk@sysmgttl1
        (unknown) at 12:15 ...

The system is going down for reboot in 45 minutes!

Broadcast message from duszyk@sysmgttl1
        (unknown) at 12:30 ...

The system is going down for reboot in 30 minutes!

Broadcast message from duszyk@sysmgttl1
        (unknown) at 12:45 ...

The system is going down for reboot in 15 minutes!

Broadcast message from duszyk@sysmgttl1
        (unknown) at 12:51 ...

The system is going down for reboot in 9 minutes!

Broadcast message from duszyk@sysmgttl1
        (unknown) at 12:52 ...

The system is going down for reboot in 8 minutes!

Broadcast message from duszyk@sysmgttl1
        (unknown) at 12:53 ...

The system is going down for reboot in 7 minutes!

Broadcast message from duszyk@sysmgttl1
        (unknown) at 12:54 ...

The system is going down for reboot in 6 minutes!

The messages might be a bit annoying if not redirected but, on the other hand, they might serve as a reminder…
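
For completeness, the at-based variant mentioned above could look like this (a minimal sketch; the time is only an example):

# echo "/sbin/shutdown -r now" | at 13:00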

Posted in LINUX.



RedHat security patching for AIX administrator

Hi, this is a “reprint” from the Red Hat Knowledge base that I found very useful today.

• Red Hat Enterprise Linux 6.x
• Red Hat Enterprise Linux 5.1 and later
• Red Hat Network Hosted
• Red Hat Satellite

Resolution

• Install the yum-security plugin. It is now possible to limit yum to install only security updates (as opposed to bug fixes or enhancements) using Red Hat Enterprise Linux 5 and 6. To do so, simply install the yum-security plugin:

For Red Hat Enterprise Linux 6

 # yum install yum-plugin-security

For Red Hat Enterprise Linux 5

 # yum install yum-security

Alternatively, download the yum-security package from the Red Hat Network (RHN) and manually install it on the system.

For Red Hat Enterprise Linux 6 using yum-security plugin:

• To list all available errata without installing them, run:

# yum updateinfo list available

• To list all available security updates without installing them, run:

 # yum updateinfo list security all
 # yum updateinfo list sec

• To get a list of the currently installed security updates this command can be used:

 # yum updateinfo list security installed

For Red Hat Enterprise Linux 5 using yum-security plugin

• To list all available errata without installing them, run:

# yum list-sec

• To list all available security updates without installing them, run:

 # yum list-security --security

For both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 5:

• To list all available security updates with verbose descriptions of the issues they apply to:

 # yum info-sec

• Run the following command to download and apply all available security updates from Red Hat Network hosted or Red Hat Network Satellite:

 # yum -y update --security

NOTE: It will install the latest available version of any package with at least one security erratum and thus can install non-security errata if they provide a newer version of the package.

• To install only the packages that have a security erratum, use:

 # yum update-minimal --security -y

• yum-security also allows installing security updates based on the CVE reference of the issue. To install a security update using a CVE reference run:

 # yum update --cve <CVE>

e.g.

 # yum update --cve CVE-2008-0947
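
To check first whether a given CVE is even applicable to the system, something like this could be used (an assumption: the "updateinfo list cves" output provided by the security plugin):

 # yum updateinfo list cves | grep CVE-2008-0947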

Viewing available advisories by severity:

 # yum updateinfo list
This system is receiving updates from RHN Classic or RHN Satellite.
RHSA-2014:0159 Important/Sec. kernel-headers-2.6.32-431.5.1.el6.x86_64
RHSA-2014:0164 Moderate/Sec.  mysql-5.1.73-3.el6_5.x86_64
RHSA-2014:0164 Moderate/Sec.  mysql-devel-5.1.73-3.el6_5.x86_64
RHSA-2014:0164 Moderate/Sec.  mysql-libs-5.1.73-3.el6_5.x86_64
RHSA-2014:0164 Moderate/Sec.  mysql-server-5.1.73-3.el6_5.x86_64
RHBA-2014:0158 bugfix         nss-sysinit-3.15.3-6.el6_5.x86_64
RHBA-2014:0158 bugfix         nss-tools-3.15.3-6.el6_5.x86_64

If you want to apply only one specific advisory:

 # yum update --advisory=RHSA-2014:0159

However, if you would like more information about this advisory before applying it:

 # yum updateinfo RHSA-2014:0159

For more commands, consult the manual page of yum-security with:

 # man yum-security

If you face any missing dependency issues while applying security patches on a system, refer to the article “yum update --security fails with missing dependency errors”.

Posted in Real life AIX.


reboot after patching?

The procedure described below applies to Linux (RedHat).
Even within the same environment, patching done due to security concerns or something else has a different meaning for different hosts. Some must be rebooted immediately to activate the “fixes”, while others may wait for a more appropriate occasion.

How to decide if a reboot can wait? Well, it depends (among other things) on the location of the host. Is it in the DMZ or not? Still, even if it is in the DMZ, a reboot may be delayed depending on the libraries affected by the “fix” (errata) and the services using them.

For example, let’s say that we need to upgrade the glibc rpms due to the just-published errata for CVE-2015-0235. To identify which services are using its functionality, you could execute the following command:

$ lsof +c 15 | grep libc- | awk '{print $1}' | sort -u

From the resulting list, identify the public-facing services and restart them. Remember that while this process may work as a temporary workaround, it is not supported by Red Hat and, should a problem arise, you will be requested to reboot the system before any troubleshooting begins.
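
A related helper, assuming the yum-utils package is installed, is the needs-restarting utility, which lists processes started before their binaries or libraries were updated:

# needs-restarting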

Posted in Linux.


How to monitor telnet traffic in AIX

1. Create the /etc/security/authlog file containing the following lines:

#!/usr/bin/ksh
/usr/bin/logger -t tsm -p auth.info "`/usr/bin/tty` login from $@ " 

2. Make it executable

# chmod +x /etc/security/authlog 

3. Modify the "/etc/security/login.cfg" file adding the following two lines just under the default: stanza.

authlog:
program = /etc/security/authlog 

4. Modify the field "auth2" in the "/etc/security/user" file:

default:
...
auth2 = authlog
...

The above can be done for all users via the default: stanza, or for a specific user by modifying only the corresponding user stanza.

5. Configure syslogd to log the information:

# vi /etc/syslog.conf
...
*.info /var/adm/authinfo.log
...

6. Create the logfile

# touch /var/adm/authinfo.log

7. Restart syslogd

# stopsrc -s syslogd 

# startsrc -s syslogd
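
Alternatively, syslogd running under the SRC can re-read its configuration without a full stop/start (an equivalent option, not part of the original steps):

# refresh -s syslogd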

8. Log in and check the authinfo.log:

# cat /var/adm/authinfo.log

You should see the successful logins.

Posted in AIX.



executing commands remotely from Satellite server

To be able to execute commands on clients of a Red Hat Satellite server, you have to equip them with the following rpm:

# yum -y install rhncfg-actions

Next, execute the following command on the host:

# rhn-actions-control --enable-all

Finally, check that the directory /etc/sysconfig/rhn/allowed-actions/script exists and that it contains an empty file called run:

# ls -l /etc/sysconfig/rhn/allowed-actions/script
total 0
-rw-r--r-- 1 root root 0 Jan 29 08:27 run 
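
To double-check which actions are enabled, the same rhn-actions-control tool can report the current state:

# rhn-actions-control --report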

Posted in LINUX.


Satellite server – syncing and cloning

A Satellite server is something like a NIM server, plus much more… A Satellite server has “Base” channels and, associated with them, “Clone” channels. The latter are the sources of operating system (RedHat Linux) packages (rpms) that a system administrator uses to patch his/her hosts. Usually, clone channels are not updated automatically; the Base channels, on the other hand, are usually synchronized with the Red Hat Network automatically (via cron).
There are many possible ways to synchronize a Clone with its Base channel. One of them, whose result is a new cloned channel synchronized up to a specific date (that you provide), is shown below.

First, make sure you have the latest packages in the Red Hat channel that already exists on your Satellite server (in this case called rhel-x86_64-server-6). To sync this channel with the latest packages, execute the next command:

# satellite-sync -c rhel-x86_64-server-6

Once the packages are synced, run the spacewalk-clone-by-date utility to create a clone of the channel as of a chosen date.

# spacewalk-clone-by-date -u satadmin \
                   -l rhel-x86_64-server-6 \
                   prod-rhel6u6-clone1224 \
                   -d 2014-12-24

The last command creates a new channel named prod-rhel6u6-clone1224 containing the errata available as of the given date (2014-12-24).
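
To confirm the new clone channel exists, one option (an assumption: the spacecmd utility is installed and configured against this Satellite) is:

# spacecmd softwarechannel_list | grep prod-rhel6u6-clone1224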

Now, you have to associate your host with this particular clone channel and patch it with yum -y update.

In another post, I will show you how to synchronize an existing clone channel.

Posted in LINUX.




