

to the users of this blog


Unfortunately, there are some whose intentions are not noble, and as a result I need to tighten the security of this site.
Today, I had to delete all “wp-users” (subscribers) of this blog.

Posted in Real life AIX.


aix host memory and its usage

Today, Annwoy came to my cube with this little treasure – for you, if you ever wonder how your host memory is used, or if someone asks how much computational or non-computational memory there is ……

#!/usr/bin/ksh
# memory calculator
# Note: svmon -G reports 4 KB pages, hence the division by 256 to get MB;
# realmem is reported in KB, so /1000 yields an approximate MB value (1024 would be exact).

um=`svmon -G | head -2 | tail -1 | awk '{print $3}'`   # in-use memory, in pages
um=`expr $um / 256`
cm=`svmon -G | head -2 | tail -1 | awk '{print $6}'`   # virtual (computational) memory, in pages
cm=`expr $cm / 256`
ncm=`expr $um - $cm`                                   # non-computational = used - computational
tm=`lsattr -El sys0 -a realmem | awk '{print $2}'`     # total real memory, in KB
tm=`expr $tm / 1000`
fm=`expr $tm - $um`                                    # free = total - used
echo "\n\n-----------------------"
echo "System : (`hostname`)"
echo "-----------------------\n\n"

echo "\n----------------------"
echo "Memory Information\n\n"
echo "total memory = $tm MB"
echo "free memory = $fm MB"
echo "used memory = $um MB"
echo "computational memory = $cm MB"
echo "non computational memory = $ncm MB"
echo "\n\n-----------------------\n"

This is a sample output:

-----------------------
System : (annwoy.edu)
-----------------------
Memory Information

total memory = 67108 MB
free memory = 1622 MB
used memory = 65486 MB
computational memory = 17264 MB
non computational memory = 48222 MB
-----------------------

As I found out from Ramon, what takes a few lines of code can be accomplished with a single command too – there is always more than one way to skin the AIX feline. 🙂

# svmon -G -O unit=MB
Unit: MB
--------------------------------------------------------------------------------------
               size       inuse        free         pin     virtual   available   mmode
memory     57344.00    56289.60     1054.40     9235.80    22733.89    32762.27     Ded
pg space    4096.00        68.3

               work        pers        clnt       other
pin         7560.64           0        11.0     1664.19
in use     22733.89           0    33555.71
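Notice how the one-liner maps onto the script’s numbers: the in use / work figure (22733.89 MB, which equals the virtual column) is the computational memory, the in use / clnt figure (33555.71 MB) is the non-computational file cache, and together they add up to the inuse value of 56289.60 MB.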

Posted in Real life AIX.



Disk I/O tuning advice for AIX 6.1

Another nice document from Dan Braden: AIX 6.1 Disk I/O tuning presentation.

Posted in Real life AIX.



How to make RedHat files immutable?

Today, I found that a Linux file or directory may be made immutable! Linux, like any self-respecting UNIX, has the chmod and chown commands, but in addition it has chattr, which can make a file immutable (+i) to any change. It can also “permanently” fix a file’s or directory’s access time so it stays the same regardless of how many times the file is accessed (+A) – I really like that one! And if the s attribute is set on a file, its blocks will be overwritten with zeros on deletion, making data recovery impossible – security minded among us, take note!

[root@wmdql1 ~]# touch removeme
[root@wmdql1 ~]# ls -l removeme
-rw-r--r-- 1 root root 0 Jan 15 12:40 removeme

[root@wmdql1 ~]# lsattr removeme
-------------e- removeme

[root@wmdql1 ~]# chattr +i /root/removeme
[root@wmdql1 ~]# lsattr removeme
----i--------e- removeme

[root@wmdql1 ~]# chattr +A /root/removeme
[root@wmdql1 ~]# lsattr removeme
----i--A-----e- removeme

[root@wmdql1 ~]# chattr +s /root/removeme
[root@wmdql1 ~]# lsattr removeme
s---i--A-----e- removeme

[root@wmdql1 ~]# chattr -s /root/removeme
[root@wmdql1 ~]# lsattr removeme
----i--A-----e- removeme

[root@wmdql1 ~]# chattr -A /root/removeme
[root@wmdql1 ~]# lsattr removeme
----i--------e- removeme

[root@wmdql1 ~]# chattr -i /root/removeme
[root@wmdql1 ~]# lsattr removeme
-------------e- removeme

[root@wmdql1 ~]# ls -l removeme
-rw-r--r-- 1 root root 0 Jan 15 12:40 removeme
[root@wmdql1 ~]#
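By the way, the attribute flags alone do not show how +i feels in practice. A quick illustration against the same file (the exact wording of the rm error may vary with your coreutils version):

[root@wmdql1 ~]# chattr +i /root/removeme
[root@wmdql1 ~]# rm -f /root/removeme
rm: cannot remove `/root/removeme': Operation not permitted
[root@wmdql1 ~]# chattr -i /root/removeme
[root@wmdql1 ~]# rm -f /root/removeme
[root@wmdql1 ~]#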

If you keep editing a file and your “stuff” keeps on disappearing ….. remember this post and execute the lsattr command against your file. Who knows, maybe the file has been set to be “immutable” to changes – which is the reason behind this post! 🙂

Posted in Linux, LINUX, Real life AIX.



Allowing “others” to manage users without sudo

For the longest time, delegating this part of the AIX administrator’s job called for the sudo command and an appropriate entry in the /etc/sudoers file.
Even now, most of us will turn to sudo – but why? There is at least one more way: use what comes with AIX – RBAC, user roles. AIX has many pre-defined roles appropriate for delegating the management of different aspects of AIX to users, freeing root (aka you) to do something else.

This post shows (without going into details) how to use roles to give the “user management authority” to somebody else who has a valid login on the AIX host. As always, one can use smitty or command line entries to accomplish this task.

Execute smitty chuser and enter the correct user_name. On the next screen find the entry labeled Roles and, using the F4/F7 combination, add the SecPolicy and AccountAdmin roles. When you are done your screen should look like:

ROLES                            [SecPolicy,AccountAdmin]

Do you see the comma character separating the two roles above?

The equivalent command line directive:

# chuser roles='SecPolicy,AccountAdmin' user_name

You probably do not need to include the quotes in the previous entry. Finally, let’s activate these roles the next time user_name logs in:

# chuser default_roles=ALL user_name
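To confirm that both changes took, list the attributes back with lsuser – the line below is roughly what you should see:

# lsuser -a roles default_roles user_name
user_name roles=SecPolicy,AccountAdmin default_roles=ALL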

Posted in Real life AIX.



VMWare, RedHat and TSM backups

If you use all of the above and you discover that all your backups are always the FULL ones, then the rest of this post is for you.

Posted in LINUX, Real life AIX.


re-zoning AIX guest’s FC adapters

There was a freshly built AIX host which was fully virtualized – its SCSI, FC and LAN adapters were all virtual, provided by two VIO servers. It was time to add (zone) some LUNs to this machine, and the SAN administrator was given its FC adapters’ WWPNs. In a few minutes, the response came back and it read: “three adapters out of four are on the same fabric!” The AIX administrator said “oops” and scratched his head. The SAN administrator added – “C0507603A41C005C is on the wrong side, it should be with 5E“.

This post follows the steps taken to attach the FC adapters to the correct SAN fabrics. Open the next page if this “news” is worthy of your time.
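By the way (a tip of mine, not part of the original walk-through): on the client LPAR, a virtual FC adapter’s WWPN can be read with lscfg – fcs0 below is just an example device name:

# lscfg -vl fcs0 | grep "Network Address"
        Network Address.............C0507603A41C005C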

Posted in AIX, Real life AIX, VIO.



LINUX kernels and their removal

Today, I had to execute a security scan against some of my Red Hat hosts and, surprisingly (at least to me), the results were not what I expected ……. Not to mention that the side effect was my AD account being “LOCKED OUT ON THIS DOMAIN CONTROLLER”, preventing me from logging in to over one hundred hosts. Looking at the report documenting the offenses, I recognized that it is not “my” hosts that are at fault but the “scanner”, of course! 🙂

Apparently, McAfee “looks” not just for the running kernel but for all Linux kernels present on a host. So even if I did yum -y upgrade and immediately followed it with another scanner run, the process would still flag this host as a “failure” because of the presence of the older kernels. It comes back to me now. Years ago, when I worked with Interactive UNIX (the origin of SUN and AIX), I had to deal with multiple kernels – once or twice I had to remove some to gain back storage capacity on a host.

You may already know today’s question, but if you don’t, do not worry too much – here it comes: “how does one list the kernels and how does one remove them from a RedHat machine?”

To list kernels on a RedHat host, execute:

# rpm -qa kernel
kernel-2.6.32-279.19.1.el6.x86_64
kernel-2.6.32-279.14.1.el6.x86_64
kernel-2.6.32-279.11.1.el6.x86_64

To list your current kernel (the short version):

# uname -r
2.6.32-279.19.1.el6.x86_64

To list your current kernel (the long version):

# uname -mrs
Linux 2.6.32-279.19.1.el6.x86_64 x86_64

The last two entries tell us that the running 2.6.32-279.19.1.el6.x86_64 kernel (the active one) is also the most up to date. So to remove the other (non-active) kernels, I have to execute these two steps:

# rpm -e kernel-2.6.32-279.11.1.el6.x86_64
# rpm -e kernel-2.6.32-279.14.1.el6.x86_64

To verify that there is just one kernel left – the one I wanted to keep:

# rpm -qa kernel
kernel-2.6.32-279.19.1.el6.x86_64

Is there a way to switch kernels on a live RedHat host, so that the next time it boots it uses a different kernel? I know that a kernel selection can be made at boot time. Do you know about any other way? If so, please let us all know too, thanks!
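One answer I can offer (a sketch for the GRUB that ships with RHEL 6; verify the entry numbering on your own host): the default= directive in /boot/grub/grub.conf selects the menu entry used at the next boot, counted from zero.

# awk '/^title/ {print i++ ": " $0}' /boot/grub/grub.conf   # list menu entries with their index
# vi /boot/grub/grub.conf                                   # e.g. set "default=1" to boot the second entry next time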

I feel this post would not be complete without this message:

To install kernel packages manually, use "rpm -ivh [package]". Do not use "rpm -Uvh" as that will remove the running kernel binaries from your system. You may use "rpm -e" to remove old kernels after determining that the new kernel functions properly on your system.
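On the same note, if the yum-utils package is present, its package-cleanup utility accomplishes the removal from above in one step – an alternative, not the method used in this post:

# package-cleanup --oldkernels --count=1    # keep only the newest installed kernel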

Posted in Linux, Real life AIX.



Processors, cores, sockets, etc ….

The number of processors in AIX refers to the number of cores, never sockets or chip modules. However, it is hard to tell from a given output whether it refers to logical, virtual, or physical processors unless you know which command was actually used, whether it ran in an LPAR holding only some of the resources, whether SMT is turned on, and whether virtual processors are in use.
Take a look below; maybe you will find answers to your questions among these commands and their output.

The command lsdev -Cc processor will show the number of physical processors (or virtual processors) in a shared processor LPAR.

# lsdev -Cc processor
proc0 Available 00-00 Processor
proc2 Available 00-02 Processor

The command lparstat -i shows the virtual and logical processors.

# lparstat -i | grep CPU
Online Virtual CPUs : 2
Maximum Virtual CPUs : 15
Minimum Virtual CPUs : 1
Maximum Physical CPUs in system : 2
Active Physical CPUs in system : 2
Active CPUs in Pool : 2
Physical CPU Percentage : 25.00%

The command topas -L shows logical processors, while mpstat shows virtual ones.
SMT thread (logical) processors are seen with bindprocessor:

# bindprocessor -q
The available processors are: 0 1 2 3

The next command also delivers SMT information.

# lsattr -El proc0
frequency 1498500000 Processor Speed False
smt_enabled true Processor SMT enabled False
smt_threads 2 Processor SMT threads False
state enable Processor state False
type PowerPC_POWER5 Processor type False
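Putting the last few outputs together: 2 virtual CPUs × 2 SMT threads each = the 4 logical processors (0 1 2 3) that bindprocessor -q listed above. Another standard AIX command worth knowing here is smtctl – run without arguments it reports the SMT capability and the thread count of each processor (I am quoting only the command, not its output):

# smtctl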

The next two commands also deliver CPU based information.

# lscfg -v | grep -i proc
Model Implementation: Multiple Processor, PCI bus
proc0 Processor
proc2 Processor
# prtconf | pg
System Model: IBM,9111-520
Machine Serial Number: 10EE6FE
Processor Type: PowerPC_POWER5
Number Of Processors: 2
Processor Clock Speed: 1499 MHz
CPU Type: 64-bit
Kernel Type: 64-bit

Posted in AIX, Real life AIX.



A New Year’s Resolution?

Today, on my schedule was a simple task of relocating cluster resources (its volume groups and the service address) from one cluster node to another. The application administrator decided to shut it down ahead of time to speed up the fail over – it was lunch time and we wanted to do whatever was possible to shorten this event. A few minutes later, I synced cluster resources from each of its nodes – just for good measure – I thought while doing it.
The relocation of cluster resources started as planned. To track its progress, on the “target” node I executed the command tail -f /var/hacmp/log/hacmp.out. It did not take long for failure messages to show up on the screen of the “source” node. Its smitty screen displayed the following lines:

Command: failed        stdout: yes           stderr: no

Before command completion, additional instructions may appear below.
Attempting to move resource group EpicTrain_RG to node epctrtpu001.
Waiting for the cluster to process the resource group movement request....
Waiting for the cluster to stabilize.......................

ERROR: Event processing has failed for the requested resource group movement.  The cluster is unstable and requires manual intervention to continue processing.

Woo! What is going on here? It was already impossible to access the source node – a sure indication that the cluster service address (an IP alias) had already been removed from the source node’s network interface. I opened a new terminal session and logged in using the “routable” IP alias placed on the source node’s “boot” adapter. Executing the command lsvg -o showed that one volume group was still active – the error message did not lie, I thought. The next command, df, showed a mounted file system. Executing the lsvg -l command with the name of the active volume group proved that the mounted file systems belonged to this group.

Looking at the situation, I developed the following plan: unmount the stubborn file systems and vary their volume group off; on the “target” node, manually import this volume group, disable its ability to come on-line automatically (vary on), and vary it off; finally, reboot all cluster nodes, synchronize them, start cluster services on all of them, and try the relocation one more time.
Content with the plan, I executed the umount command against the first offending file system. It took a while for this command to fail – “something” or someone was using it. It was time to behave like the master of this cluster. No more mercy – it is getting late and I am getting really hungry. The following snippet, executed from the command line, was employed to un-mount the file systems:

# for fs in `lsvgfs modupgrade_vg`
> do
> fuser -kuxc $fs;umount $fs
> done
/epic/redalert:        1c(root) 2293912c(root) 2359472r(root) 2490554c(root) 2621560c(root) 2883634r(root) 3408060r(root) 3670132r(root) 3735672r(root) 3801208r(root) 3866746r(root)

As soon as I hit the last “Enter”, the screen showed some processes being killed (courtesy of fuser -kuxc) and then froze. Yes, this node was going for a reboot! What is going on here? Wait a moment – the screen on the other node started to change too. The /var/hacmp/log/hacmp.out log on the target node came alive – the resource relocation had finally started!
After a short while the cluster service IP address became available again, and the application administrator logged in to tend to her application. It did not take long for Lana to call me – “Mark, not all file systems survived this relocation”. This is going to be one late lunch indeed; I logged on and started to look around. Both nodes showed the same volume groups and an identical count of file systems. How, then, could the target node not have the same file systems as the source?

Have you ever wondered how the df command works? We can speculate that it reconciles information delivered via the lsvg command (logical volume names, their sizes and the size of the physical partition) with the stanzas present in the file /etc/filesystems, which tie logical volumes to file systems and their attributes. A file system name is defined by the stanza label; its logical volume and any additional attributes are contained in the stanza body. If you take this reasoning a bit further, you may see that the same logical volume may be “mounted” on different occasions using different file system names aka stanza labels aka mount points (directories).
Try it for yourself: un-mount a file system, create a new mount point with the mkdir command, replace the label of the appropriate stanza in the /etc/filesystems file, and mount using the new directory name. It works, right?
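A minimal sketch of that experiment – the names /data and /data_new are made up for illustration:

# umount /data             # file system defined by the "/data:" stanza
# mkdir /data_new          # a new mount point
# vi /etc/filesystems      # rename the stanza label "/data:" to "/data_new:"
# mount /data_new          # the same logical volume, mounted under a new name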

The file /etc/filesystems has different dates and sizes on each of my nodes; their contents are not identical. Are you thinking what I’m thinking? Using vi, I compared the contents of these files on each cluster node. It did not take long to realize that the “missing” file systems are not really missing, but that they are associated with different mount points – they were renamed. Look below: one node has it as “/epic/rtlupg02” but the other has it as “/epic/rlsupg02”.
Source node example entry:

/epic/rtlupg02:
        dev             = /dev/trone02_lv
        vfs             = jfs2
        log             = INLINE
        mount           = false
        check           = false
        account         = false

Target node example entry:

/epic/rlsupg02:
        dev             = /dev/trone02_lv
        vfs             = jfs2
        log             = INLINE
        mount           = false
        check           = false
        account         = false

Shortly after, it became obvious not only that the target node’s /etc/filesystems is different, but also that this node does not have the directories (mount points) associated with the “missing” file systems. Some file systems had been renamed on the source node, and this information had not been propagated to the remaining nodes in the cluster. The recovery was not difficult: un-mount the selected file systems, create new mount points and adjust their ownership, edit the appropriate stanzas, mount the renamed file systems aka the “missing” ones, and call the application administrator to verify and start the application.

Now it is time to answer the question that everybody has in mind: what did happen? I cannot say why the particular volume group of the resource group did not move to the target node. I have no idea why the operating system on the source node was not able to un-mount its file systems and consequently vary this volume group off. This mystery forever belongs to the 1% of computer science commonly known as witchcraft – or maybe this was just my karma?
On the other hand, the mystery of the missing file systems is not a mystery at all. Somehow, someone using AIX LVM instead of PowerHA LVM renamed a number of file systems on the source node, and as a result the /etc/filesystems files on the two nodes were no longer the same. But I do not have the luxury of pondering this issue. The line is quite long: reset a few passwords, install two lpars, expand a file system, find out why two people have apparently identical logins, and so forth.

A few hours later, while on a train going home, I suddenly experienced a sort of spiritual awakening. The memories returned and I knew exactly why the /etc/filesystems files from a few hours ago were different. It was me! Yes, it was me! In early October, Sandy came and soaked us with heavy rain. It also brought heavy winds that destroyed some trees, which on their way down to the ground took with them the aerial fiber links connecting us with our data centers. As a result, one of these data centers was effectively isolated and unreachable for several days. To provide computer services to users, we broke the clusters and their mirrors and started providing services from the nodes in the available data center. In the case of this particular cluster, a few days later its application administrators requested that some file systems be renamed. Without contact with the “remote” node, the local LVM was employed to answer this request. After the fact, I made a mental note to reconcile the contents of /etc/filesystems as soon as all nodes rejoined the cluster. After the connectivity was restored, nobody wanted to go for another downtime to fully synchronize and verify the clusters. We were all very happy with restored access to all data centers, all hosts accounted for and all clusters up and running again. The memories of previous intentions and the things to remember completely faded away.

This story shows that, in order to work with a lesser amount of stress, a sysadmin must be better organized. I have to develop and follow a system that will help me keep track of everything that was put aside to be done at a later date.

What immediately comes to mind is an edit of the ~/.profile file with a colored echo statement stating what has to be done and when. Afterwards, every time I log in, a colorful banner will remind me about it.
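Something along these lines, appended to ~/.profile, would do – a minimal sketch, assuming a terminal that honors ANSI escape sequences (the reminder text is of course just an example):

# red, bold reminder printed at every login
printf "\033[1;31m%s\033[0m\n" "TODO: reconcile /etc/filesystems on all cluster nodes!"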

Posted in Real life AIX.




Copyright © 2015 - 2016 Waldemar Mark Duszyk – best viewed with your eyes.