
multiple search bases in LDAP and AD

The time required to search for data is a function of the repository size, right? The same applies to LDAP and AD – they are both data repositories. In their case, one may speed up searches by using multiple search bases. Case in point: in my Active Directory, users are defined in one of the following places:

OU=Secured,OU=Corporate Users,DC=wmd,DC=edu
OU=Users,OU=Ping,ou=Managed By Others,DC=wmd,DC=edu
OU=Users,OU=Research,ou=Managed By Others,DC=wmd,DC=edu
OU=ServiceAccounts,OU=Corporate Servers,DC=wmd,DC=edu

Their UNIX group definitions are stored here:

OU=Unix,OU=Security Groups,OU=Corporate Groups,DC=wmd,DC=edu

In the case of a Red Hat, CentOS, or Scientific Linux host using nslcd, one limits the scope of LDAP searches by adjusting the contents of /etc/nslcd.conf:

# Customize certain database lookups.
base passwd OU=Secured,OU=Corporate Users,DC=wmd,DC=edu
base passwd OU=Users,OU=Ping,ou=Managed By Others,DC=wmd,DC=edu
base passwd OU=Users,OU=Research,ou=Managed By Others,DC=wmd,DC=edu
base passwd OU=ServiceAccounts,OU=Corporate Servers,DC=wmd,DC=edu
base group OU=Unix,OU=Security Groups,OU=Corporate Groups,DC=wmd,DC=edu 

Any change to this file must be followed by a restart of the nslcd daemon.

# service nslcd restart
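
To confirm the new search bases are in effect, do a quick lookup – jdoe below is just a placeholder account name:

# getent passwd jdoe
# id jdoe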

In the case of AIX, one goes straight to /etc/security/ldap/ldap.cfg:

userbasedn OU=Secured,OU=Corporate Users,DC=wmd,DC=edu
userbasedn OU=Users,OU=Ping,ou=Managed By Others,DC=wmd,DC=edu
userbasedn OU=Users,OU=Research,ou=Managed By Others,DC=wmd,DC=edu
userbasedn OU=ServiceAccounts,OU=Corporate Servers,DC=wmd,DC=edu
groupbasedn OU=Unix,OU=Security Groups,OU=Corporate Groups,DC=wmd,DC=edu 

A change to this file must be followed by a restart of the LDAP client daemon. One way of doing it is shown below.

# restart-secldapclntd
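
Again, it is worth verifying that user lookups still resolve through LDAP – jdoe remains a placeholder account name:

# lsuser -R LDAP jdoe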

Posted in Real life AIX.


repository (vtopt0) issue while patching vios

It is my turn to patch the VIO servers… :-)

vioaprpu001:/home/padmin>updateios -commit
User can not perform updates with media repository(s) loaded.
Please unload media images.

What is going on here? What is loaded and where?

vioaprpu001:/home/padmin>lsvopt
VTD             Media                              Size(mb)
vtopt0          RH6.2.iso                          unknown

Really? Someone played with Red Hat? Apparently so, except now the repository does not seem to be quite right… The next command should unload it.

vioaprpu001:/home/padmin>unloadopt -vtd vtopt0
Device "vtopt0" is not in AVAILABLE state.

We need more information about this device – its adapter, state, and whatever else we can find.

vioaprpu001:/home/padmin>lsmap -all
.......
------------- -------------------------------------------------
vhost13              U9117.MMB.1060FFP-V1-C19      0x00000000

VTD                   vtopt0
Status                Defined
LUN                   0x8100000000000000
Backing device        /var/vio/VMLibrary/RH6.2.iso
Physloc
Mirrored              N/A

It should be Available, not Defined. Definitely, someone built it and then only partially dismantled it. Well, let's get rid of it.

vioaprpu001:/home/padmin>rmvdev -vtd vtopt0
vtopt0 deleted

Now, going back to the patching I should be doing today.

vioaprpu001:/home/padmin>updateios -commit
All updates have been committed.

… and whatever steps follow…
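
If the virtual optical device is needed again after the patching, it can be recreated and the ISO reloaded – a sketch, assuming the same vhost13 adapter and the image still present in the media library (the VTD name handed out by mkvdev may differ from vtopt0):

vioaprpu001:/home/padmin>mkvdev -fbo -vadapter vhost13
vioaprpu001:/home/padmin>loadopt -vtd vtopt0 -disk RH6.2.iso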

Posted in Real life AIX.


user creation/password encryption in RedHat

Creating a new user account and setting its password at the same time is a two-step procedure.

a. encrypt the password (in this case the password is abc123)

# openssl passwd
Password:
Verifying - Password:
2n50KL0pCn096

b. call the useradd command to do the rest

# useradd -c 'testing gecos' -g oinstall \
-m -d /home/testing -p 2n50KL0pCn096 newuser

In this case, the newuser login account's primary group will be the group called oinstall.

# groups newuser
newuser : oinstall

But if we create it with -G instead of -g:

# useradd -c 'testing gecos' -G oinstall \
-m -d /home/testing -p 2n50KL0pCn096 newuser

The new account's primary group will be created automatically and called newuser, and its group membership will list the following two groups: newuser and oinstall.

# groups newuser
newuser: newuser, oinstall
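
One caveat worth knowing: by default, openssl passwd produces an old crypt(3) DES hash, which only honors the first eight characters of the password. For something stronger, ask for an MD5 hash instead (newer OpenSSL builds also accept -5 and -6 for SHA-256/SHA-512 – check your version):

# openssl passwd -1

The resulting $1$… string is then passed to useradd -p exactly as before.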

Posted in Real life AIX.


list packages installed (and not) from a specific repository

Sometimes you wonder which packages come from which repository… Here is the answer:

# yumdb search from_repo RepoName

For example, to learn what packages come from the epel repository you execute:

# yumdb search from_repo epel | grep -v '='
Loaded plugins: product-id, rhnplugin
This system is receiving updates from RHN Classic or RHN Satellite.
epel-release-6-8.noarch
facter-1.6.18-7.el6.x86_64
fping-2.4b2-10.el6.x86_64
gperftools-libs-2.0-11.el6.3.x86_64
libmongodb-2.4.12-2.el6.x86_64
libunwind-1.1-2.el6.x86_64
mongodb-2.4.12-2.el6.x86_64
..................................

How do you list the packages in a repository? First, generate the list of repositories the host subscribes to with yum repolist.

# yum repolist
Loaded plugins: product-id, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
repo id            repo name                              status
clone-rhel-x86_64-server-6            .....................
clone-rhel-x86_64-server-optional-6   .....................
clone-rhn-tools-rhel-x86_64-server-6 ......................
epel                                  .....................
vmware-tools                          .....................

To list what packages can be installed from the repository called vmware-tools:

# yum --disablerepo "*" --enablerepo "vmware-tools" list available
Loaded plugins: product-id, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Available Packages
...............................
vmware-tools-core.x86_64     8.6.5-2                    vmware-tools
vmware-tools-esx.x86_64      8.6.5-2                    vmware-tools
...............................
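
An alternative, assuming the yum-utils package is installed, is repoquery, which skips the repo enable/disable dance:

# repoquery --repoid=vmware-tools -a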

Posted in Real life AIX.



remote command execution with ssh

Executing a remote command that wants a terminal on the remote side (vi in the example below) can fail if the ssh invocation does not include the -t parameter, which forces pseudo-terminal allocation.

# for h in `cat HoststoFix`
do
    ssh -t $h 'grep oldserver /etc/security/ldap/ldap.cfg && \
    vi +%s/oldserver/newserver/g +wq /etc/security/ldap/ldap.cfg'
done

Posted in Real life AIX.



execute commands from RH Satellite on its clients

… using the osad service. To set it up, follow these steps.

On your Satellite server:

a. install the dispatcher rpm.

# yum -y install osa-dispatcher

b. enable it so it starts on boot.

# chkconfig osa-dispatcher on

c. start it so you can use it right away.

# service osa-dispatcher start

On your RH Satellite client (host) side:

a. make sure it subscribes to the RHN Tools child channel of your given release

b. install these two packages (the osad client and the RHN actions tooling):

# yum -y install osad rhncfg-actions

c. enable remote command execution. The output below shows the process:

[root@rcpsqlpl1 ~]# rhn-actions-control --report
deploy is disabled
diff is disabled
upload is disabled
mtime_upload is disabled
run is disabled
# rhn-actions-control --enable-run
# rhn-actions-control --report
deploy is disabled
diff is disabled
upload is disabled
mtime_upload is disabled
run is enabled
# rhn-actions-control --enable-all
# rhn-actions-control --report
deploy is enabled
diff is enabled
upload is enabled
mtime_upload is enabled
run is enabled

d. enable the osad service so it starts on boot.

# chkconfig osad on

e. start it now so you can use its services immediately.

# service osad start
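
Once osad is running, you can also force an immediate check-in from the client to confirm that scheduled actions get picked up:

# rhn_check -vv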

Posted in Real life AIX.



to build AIX SystemMirror 7.1 from the command line

Attempts to migrate from SystemMirror 6.1 to 7.1 failed, were forgotten for a long time, and now the “cluster” has to happen. Before this cluster was forgotten, its nodes were “messed up” by different admins attempting “magic” that never worked. Today, both nodes underwent a cleansing: removal of the existing cluster definitions and filesets, unification and clean-up of /etc/hosts and /etc/cluster/rhosts, and assignment of proper and consistent node names/unames/hostnames. The names are very important and should never be changed after the cluster is built, really! This post does not document a cluster upgrade via “migration” – this is a brand new installation and cluster configuration. We installed SystemMirror 7.1.3.0 followed by an update_all to 7.1.3.2. Lather, rinse, and repeat on the other node. When done, reboot both nodes.

When a cluster node boots, it gets assigned an IP address that we call its “boot IP”. Each cluster node has one unique boot IP from the same network, and each node can ping its future partner node (this requires using the router assigned to the cluster service address network). I use the short label associated with the boot address in /etc/hosts as the node name, which means the hostname -s and uname -n commands will generate the same output. In this case the output will be node1b or node2b, depending on the node.

The /etc/hosts file was formatted following the layout required by SystemMirror, which in my case looks like this:

127.0.0.1       loopback localhost
10.254.245.72   node1b.wmd.edu   node1b
10.254.245.73   node2b.wmd.edu   node2b
10.19.80.108    node2
10.19.80.31     node1
10.19.80.112    egtapqu002      # cluster Service IP

Note the order: IP address, fully qualified hostname, and finally the short name. You do not need to follow this rule for the non-boot entries. Our cluster will contain two nodes, node1b and node2b. The non-routable “boot” network is used to pass Ethernet packets, heartbeats, etc., so in order for clients to log in to these hosts we have to assign each node's network interface an IP alias from a “routable” network. The cluster “service” address will live on the same routable network as the two aliases.

After the software installations are done, the nodes rebooted, and their hostname/uname verified (node1b and node2b), we need to edit /etc/cluster/rhosts so it contains only the boot IP addresses and the ones associated with the node aliases:

10.254.245.72
10.254.245.73
10.19.80.108
10.19.80.31

At this time, it is also a good idea to edit /usr/es/sbin/cluster/netmon.cf so it contains the addresses of external routers, time servers, etc. These targets should live on networks other than the one used by our aliases and the service address.
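
A minimal netmon.cf is just a list of pingable addresses, one per line; the two entries below are made-up examples standing in for an external router and a time server:

10.19.1.1
10.19.1.20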

Let the fun begin! We will build our new cluster from the command line. Create a two-node cluster called egtaptq2 using hdisk8 as the repository (heartbeat) disk.

node1b:TP:/> clmgr add cluster egtaptq2 \
                    nodes=node1b,node2b \
                    type=NSC \
                    heartbeat_type=unicast \
                    repository_disk=hdisk8

Cluster Name:    egtaptq2
Cluster Type:    Stretched
Heartbeat Type:  Multicast
Repository Disk: None
Cluster IP Address: None

There are 2 node(s) and 1 network(s) defined

NODE node2b:
        Network net_ether_01
        node2b    10.254.245.73

NODE node1b:
        Network net_ether_01
        node1b    10.254.245.72
.............

Let's check whether this really is a STRETCHED cluster, as the output above claims. Executing /usr/es/sbin/cluster/utilities/cltopinfo delivers:

Cluster Name:    egtaptq2
Cluster Type:    Standard
Heartbeat Type:  Unicast
Repository Disk: hdisk7 (00f660fd661b72a4)

There are 2 node(s) and 1 network(s) defined

NODE node2b:
        Network net_ether_01
        node2b    10.254.245.73

NODE node1b:
        Network net_ether_01
        node1b    10.254.245.72

No resource groups defined

So the cluster has indeed been created as a Standard one with unicast heartbeating, nice. Executing lspv, I could not see the caavg_private volume group built on the selected repository disk, so I requested a cluster synchronization.

node1b:TP:/>clmgr sync cluster 

When the sync finished, the cluster “heartbeat” volume group was in place.

node1b:TP:/>lspv | grep caa
hdisk7          00f660fd661b72a4    caavg_private   active
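
The CAA layer itself can be inspected as well – lscluster shows the cluster configuration and the repository disk:

node1b:TP:/>lscluster -c
node1b:TP:/>lscluster -d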

Time to do the persistent addresses (aliases on the routable network).

node1b:TP:/>clmgr add persistent_ip 10.19.80.31 \
                  network=net_ether_01 \
                  node=node1b

node1b:TP:/>clmgr add persistent_ip 10.19.80.108 \
                 network=net_ether_01 \
                 node=node2b

Validate our action:

node1b:TP:/>clmgr -v q pe
NAME="node2"
IPADDR="10.19.80.108"
NODE="node2b"
NETMASK="255.255.255.0"
NETWORK="net_ether_01"
INTERFACE=""
NETTYPE="ether"
TYPE="persistent"
ATTR="public"
GLOBALNAME=""
HADDR=""

NAME="node1"
IPADDR="10.19.80.31"
NODE="node1b"
NETMASK="255.255.255.0"
NETWORK="net_ether_01"
INTERFACE="en0"
NETTYPE="ether"
TYPE="persistent"
ATTR="public"
GLOBALNAME=""
HADDR=""

Reboot again and watch each node come back with two addresses on its en0 interface – the boot address and the routable alias. From this point on, you do not need the HMC to log in to these nodes.

Not much left – define the application controller (the holder of the start/stop scripts):

node1b:TP:/>clmgr add application_controller egate2Ctrl \
     startscript=/usr/es/sbin/cluster/scripts/start_cluster.ksh \
     stopscript=/usr/es/sbin/cluster/scripts/stop_cluster.ksh

and validate it.

node1b:TP:/>clmgr -v q ac
NAME="egate2Ctrl"
MONITORS=""
STARTSCRIPT="/usr/es/sbin/cluster/scripts/start_cluster.ksh"
STOPSCRIPT="/usr/es/sbin/cluster/scripts/stop_cluster.ksh"

Let's create a resource group called egtqa2RG:

node1b:TP:/>clmgr add resource_group egtqa2RG \
                  nodes=node1b,node2b \
                  startup=OHN \
                  fallover=FNPN \
                  fallback=NFB \
                  service_label=egtapqu002 \
                  applications=egate2Ctrl \
                  volume_group=egtapq002_vg \
                  fs_before_ipaddr=true

and validate it.

node1b:TP:/root>clmgr -v q rg
NAME="egtqa2RG"
CURRENT_NODE=""
NODES="node1b node2b"
STATE="UNKNOWN"
TYPE=""
APPLICATIONS="egate2Ctrl"
STARTUP="OHN"
FALLOVER="FNPN"
FALLBACK="NFB"
NODE_PRIORITY_POLICY="default"
NODE_PRIORITY_POLICY_SCRIPT=""
NODE_PRIORITY_POLICY_TIMEOUT=""
DISK=""
VOLUME_GROUP="egtapq002_vg"
FORCED_VARYON="false"
FILESYSTEM=""
FSCHECK_TOOL="fsck"
RECOVERY_METHOD="sequential"
........

At this stage, we synchronize the cluster and check its status.
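
The synchronization is the same clmgr call used earlier:

node1b:TP:/root>clmgr sync cluster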

node2b:VF:/root>clmgr -cv -a name,state q node
# NAME:STATE
node2b:OFFLINE
node1b:OFFLINE

To start the cluster services (with manage=manual, the resource groups are not brought online automatically):

node1b:TP:/root>clmgr start cluster when=now \
          manage=manual clinfo=true fix=interactive
....
node1b: Feb  6 2015 08:06:55
node1b: Completed execution of /usr/es/sbin/cluster/etc/rc.cluster
node1b: with parameters: -boot -N -M -b -i -C interactive -P cl_rc_cluster.
node1b: Exit status = 0

Bring the resource group online on the other node:

node1b:TP:/root> clmgr online resource_group egtqa2RG node=node2b

Where is our resource group?

node1b:TP:/root> clmgr -cv -a name,state,current_node q rg
# NAME:STATE:CURRENT_NODE
egtqa2RG:ONLINE:node2b

Take our resource group down:

node2b:VF:/root>clmgr offline resource_group egtqa2RG node=node2b
Attempting to bring group egtqa2RG offline on node node2b.
Waiting for the cluster to process the resource group movement request....
Waiting for the cluster to stabilize......

Well, this is all for now. I cannot do a failover test because only one node has all the required resources. Soon we will get more memory and CPUs, the other node will match its partner, and we will be able to put the cluster through its paces. Till then, enjoy your weekend – it is getting close :-)
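
For the record, once both nodes are equal, a basic failover test could be as simple as moving the resource group between them – a sketch, not something executed today:

node1b:TP:/root> clmgr move resource_group egtqa2RG node=node1b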

Posted in Real life AIX.



IntegrityError: null value in column “package_id” violates not-null constraint

This message was taken from an email received from a Red Hat Satellite server… Apparently Satellite has issues synchronizing its packages with RHN.
This issue is caused by a bad satellite-sync cache on the Satellite server. First, clear the cache as shown below and then sync a specific Satellite channel or all of its channels at once:

  # rm -rf /var/cache/rhn/satsync/* 

This step takes a while to run. When it's done, sync the specific Red Hat Satellite channel again:

  # satellite-sync -c <channel-label> 

or all of its channels at once:

  # satellite-sync 
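
If you do not remember the exact channel labels, satellite-sync can list them for you:

  # satellite-sync --list-channels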

Posted in Linux, Real life AIX.



Building SystemMirror 7.1 cluster

Many months ago, I attempted to migrate from 6.1 to 7.1 and all my attempts failed one by one. The cluster would not sync and there seemed to be no way to get it up and running. Eventually, I gave up and put it on the “back burner”. A week ago, both I and the nodes in the “cluster” had time for another shot.

I started by removing the cluster definition (using smitty hacmp), followed by the removal of all “hacmp cluster” related filesets. Next, I installed them again (SystemMirror 7.1.3.0) followed by an update_all with the 7.1.3.2 code. At this point I re-created /etc/hosts and /etc/cluster/rhosts and made sure they were identical on each node.

Each node had the same hostname/uname as the label associated with its boot IP address in /etc/hosts. Next, since our boot addresses are not routable, each node received an IP alias on the same network as the “service” address, followed by setting the gateway address on the boot interfaces. Yes, both networks use the same netmask!

Reboot both nodes, start clcomd, and configure the cluster. It took a few sync failures before the sun started shining in my neck of the woods. The first few syncs failed for no apparent reason (asking me to contact IBM…), but I noticed that there was no heartbeat volume group (caavg_private) and its associated file system. What helped is shown below. You guessed it – it was executed on each node.

# export CAA_FORCE_ENABLED=1
# rmcluster -f -r hdisk9
# rmcluster -f -r hdisk10
# rmdev -dl hdisk10
# rmdev -dl hdisk9
# cldare -rtV normal

# shutdown -Fr

Originally (and the plan still has not changed), I set hdisk10 as the repository (heartbeat) disk and hdisk9 as its backup, so I went back to the cluster configuration menu and set them up again.

The next sync almost succeeded, but it failed because an entry was missing from the /etc/snmpdv3.conf file. Why this file? I had no idea – we use SNMP v2. But I complied and added the missing entry. Here it is:

smux 1.3.6.1.4.1.2.3.1.2.1.5 clsmuxpd_password
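
For the new smux entry to be picked up, snmpd most likely needs to be recycled; on AIX that is done through the SRC:

# stopsrc -s snmpd
# startsrc -s snmpd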

Another sync, which took really long, and the long-awaited OK prompt showed up! Next week, I have another 7.1.3.2 cluster to build. But this time I will try to set it up via the command line alone – I have never done it, so it should be fun! :-)

Posted in HACMP, Real life AIX.



to reboot a RedHat host in the future

It can be done with the at command, with crontab -e executed as root, or with the plain old shutdown command given the appropriate time, for example:

#  nohup shutdown -r 13:00 &

..................
The system is going down for reboot in 90 minutes!

Broadcast message from duszyk@sysmgttl1
        (unknown) at 12:00 ...

The system is going down for reboot in 60 minutes!

Broadcast message from duszyk@sysmgttl1
        (unknown) at 12:15 ...

The system is going down for reboot in 45 minutes!

Broadcast message from duszyk@sysmgttl1
        (unknown) at 12:30 ...

The system is going down for reboot in 30 minutes!

Broadcast message from duszyk@sysmgttl1
        (unknown) at 12:45 ...

The system is going down for reboot in 15 minutes!

Broadcast message from duszyk@sysmgttl1
        (unknown) at 12:51 ...

The system is going down for reboot in 9 minutes!

Broadcast message from duszyk@sysmgttl1
        (unknown) at 12:52 ...

The system is going down for reboot in 8 minutes!

Broadcast message from duszyk@sysmgttl1
        (unknown) at 12:53 ...

The system is going down for reboot in 7 minutes!

Broadcast message from duszyk@sysmgttl1
        (unknown) at 12:54 ...

The system is going down for reboot in 6 minutes!

The messages might be a bit annoying, but on the other hand they may serve as a reminder…?
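
The same reboot can also be queued with the at command, which stays quiet until the job actually fires – a quick sketch:

# echo "/sbin/shutdown -r now" | at 13:00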

Posted in LINUX.
