Many months ago, I attempted to migrate from 6.1 to 7.1 and all my attempts failed one by one. The cluster would not sync and there seemed to be no way to get it up and running. Eventually, I gave up and put it on the back burner. A week ago I had time, and so did the nodes in the "cluster", so I gave it another shot.
I started by removing the cluster definition (using smitty hacmp), followed by removal of all the "hacmp cluster" related filesets. Next, I installed them again (SystemMirror 7.1) and followed with update_all using the latest service pack code. At this point I re-created /etc/cluster/rhosts and made sure the files were identical on each node.
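For reference, the command-line equivalent of that part looks roughly like this – a sketch only; the fileset pattern and the two addresses are placeholders, not the real ones from my nodes:
# lslpp -l "cluster.*"
# installp -ug "cluster.es*"
# cat /etc/cluster/rhosts
10.1.1.11
10.1.1.12
# refresh -s clcomd
lslpp shows what is still installed, installp -u removes it, rhosts gets one boot address per line, and the refresh makes clcomd re-read the file without a reboot.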
Each node had the same hostname/uname as the label in /etc/hosts associated with its boot IP address. Next, since our boot addresses are not routable, each node received an IP alias on the same network as the "service" address, followed by setting the gateway address on the boot interfaces. Yes, both networks use the same netmask!
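In case it helps, the alias and the default gateway can be put in place along these lines – the interface name and the addresses below are invented, use your own:
# ifconfig en0 alias 192.168.100.21 netmask 255.255.255.0
# route add 0 192.168.100.1
# chdev -l en0 -a alias4=192.168.100.21,255.255.255.0
The chdev form records the alias in the ODM so it survives a reboot; the plain ifconfig alias does not.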
I rebooted both nodes, started clcomd and configured the cluster. It took a few sync failures before the sun started shining in my neck of the woods. The first few syncs failed for no apparent reason (asking me to contact IBM…), but I noticed that there was no heartbeat volume group, caavg_private, nor the file system that goes with it. What helped is shown below. You guessed it – it was executed on each node.
# export CAA_FORCE_ENABLED=1
# rmcluster -f -r hdisk9
# rmcluster -f -r hdisk10
# rmdev -dl hdisk10
# rmdev -dl hdisk9
# cldare -rtV normal
# shutdown -Fr
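Before touching the configuration again, it is worth confirming that the old repository is really gone once the nodes are back up – something along these lines (my disk names, obviously):
# lspv | grep -E "hdisk9|hdisk10"
# lsvg
# lscluster -m
lspv should show both disks without a volume group, lsvg should no longer list caavg_private, and lscluster -m should not report an active CAA cluster anymore.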
Originally (and the plan still has not changed), I had set hdisk10 as the heartbeat disk and hdisk9 as its backup, so I went back to the cluster configuration menu and set them up again.
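Once a sync finally goes through, the repository shows up again as the caavg_private volume group; a quick way to confirm it (again, my disk names):
# lspv | grep caavg_private
# lsvg caavg_private
The first command shows which hdisk carries it, the second the volume group details.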
The next sync almost succeeded, but it failed because an entry was missing from the /etc/snmpdv3.conf file. Why this file? I had no idea – we use SNMP v2. But I went along and added the missing entry. Here it is:
smux 1.3.6.1.4.1.2.3.1.2.1.5 clsmuxpd_password
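I am fairly sure snmpd has to be bounced to pick the new entry up, so the usual SRC way:
# stopsrc -s snmpd
# startsrc -s snmpd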
Another sync, which took really long, and the long-awaited OK prompt showed up! Next week I have another 7.1 cluster to build. But this time I will try to set it up from the command line alone – I have never done it that way; it should be fun!
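If the command-line route works out, it will be built around clmgr. From the documentation the skeleton should look more or less like this – treat it as a sketch, not a recipe; attribute names differ a bit between service packs, and the cluster, node and disk names here are made up:
# clmgr add cluster my_cluster NODES=nodeA,nodeB
# clmgr modify cluster REPOSITORY=hdisk10
# clmgr sync cluster
# clmgr online cluster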