Overview
You delete a node (an Intermediary) by mistake from the Manage -> Network view in Aurea Monitor Management Server (AMS) Enterprise version. To make the node reappear, you stop the node, delete it from Active Agents under Deployment -> Agents as well, and restart the node. However, the node does not reappear in the Network view; it only shows up under Active Agents.
Suspecting a caching issue, you remove the Deploy folder from $AI_Launcher_Home on the AI node and restart the node. This leads to duplicate nodes in the Active Agents list, but the node still does not become visible in the Network view.
Solution
Nodes in AMS are uniquely identified using their machine names.
Duplicate nodes in the Active Agents list, even after removing the Deploy folder from $AI_Launcher_Home and restarting the node, are a symptom that your agent is working fine but that there is an issue with the node (host) identification.
Here are a few scenarios in which this can happen:
- your second node is a cloned virtual machine (VM) that has the same (internal) machine ID / SID as the VM it was cloned from
- a load balancer sits between your AMS and the Intermediary nodes and is altering the source IP or hostname
Ensure unique machine ID / SID
If the VM that is not showing up in the Network View is a VM that was copied or cloned from an existing VM, verify its machine ID / SID. This can be done:
- on Linux: using the commands "hostnamectl" or "cat /etc/machine-id"
- on Windows: using the Sysinternals utility psgetsid
Once you have the machine ID / SID, compare it with the source VM's machine ID. If they match, work with your VM administrator to ensure that the cloned VM gets a different, unique machine ID / SID.
If ensuring a unique machine ID is not possible, use the method described below under Override the unique identifier of each machine using JVM arguments on the profile.
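For example, on Linux you could compare the two machine IDs remotely. This is only a sketch that assumes SSH access; source-vm, cloned-vm, and the admin account are placeholders:
- ssh admin@source-vm cat /etc/machine-id   # prints the source VM's machine ID
- ssh admin@cloned-vm cat /etc/machine-id   # should print a different value; identical output confirms the clone kept the same ID
If the IDs are identical, on systemd-based distributions the VM administrator can typically regenerate one (for example by emptying /etc/machine-id and re-running systemd-machine-id-setup); coordinate any such change with them.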
Verify whether the load balancer is altering the source IP or hostname
If any of the requests to or from your monitored nodes go through the load balancer, verify whether it has any of these headers set:
- X-Client-IP
- WL-Proxy-Client-IP
- HTTP_X-Forwarded-For
- X-Forwarded-For
Usually, these headers are used to preserve the origin of the request, in other words, the client IP or hostname.
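One rough way to check what the load balancer actually forwards is to let it route a request to a temporary listener on the monitored node and inspect the raw request that arrives. This is only a sketch: lb.example.com and port 8099 are placeholders, and it assumes netcat is available and that the load balancer can route that port to the node:
- nc -l 8099   # run on the monitored node; prints the raw incoming request, including any X-Forwarded-For / X-Client-IP headers the load balancer adds (some netcat variants need nc -l -p 8099)
- curl -v http://lb.example.com:8099/   # run from a client, so the request passes through the load balancer
Alternatively, the load balancer's own configuration shows whether it inserts or rewrites these headers.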
If the load balancer is altering IPs or hostnames and this behavior needs to be kept for whatever reason, use the method described below under Override the unique identifier of each machine using JVM arguments on the profile.
Override the unique identifier of each machine using JVM arguments on the profile
Note: The prerequisite for this method to work correctly is version 2021 (v12) HF2.
If none of the previous methods work or are applicable, the last resort is to use Aurea Monitor's capability to override the unique identifier reported by each machine.
You can do this by providing each of the following JVM arguments on the profile of each node in AMS:
- com.actional.lg.interceptor.sdk.ServerInstance
- actional.agent.nodeid
- actional.agent.nodename
For example, if the first node has the hostname server01 and the second (e.g., the cloned VM) is server02, then adjust the launcher profile of each of them by adding the following parameters to the Additional JVM Options:
- for the launcher profile of server01, add these:
- -Dcom.actional.lg.interceptor.sdk.ServerInstance=server01
- -Dactional.agent.nodeid=server01
- -Dactional.agent.nodename=server01
- for the launcher profile of server02, add these:
- -Dcom.actional.lg.interceptor.sdk.ServerInstance=server02
- -Dactional.agent.nodeid=server02
- -Dactional.agent.nodename=server02
Once the profiles are adjusted, provision them.
These settings ensure that both nodes show up as dedicated managed nodes.
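To double-check that the overrides were picked up after provisioning, you can inspect the running node's JVM arguments on each host; for example, on Linux (the property name is one of those listed above):
- ps -ef | grep -- -Dactional.agent.nodeid   # the node's Java process should show the expected -D values for that host
If the expected values appear on the process command line, the profile changes were applied.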