
Verifying ACI fabric with APIC CLI

Hello Friends,

Today, I am going to share how you can verify the ACI fabric from the APIC CLI (Command Line Interface). Though we can do the same thing from the GUI, we also need to know how to look at things from the CLI. For the demonstration, we shall be using the topology below:

[Topology: a single APIC (apic1) managing a fabric of one spine switch (CGTSpine) and two leaf switches (CGTLeaf and CGTLeaf2)]

First of all, we need to understand the directory structure maintained by the APIC controller. Once you log in to the APIC controller over SSH, you land in the home directory of the admin user. There you can see three directories:

- aci
- debug
- mit

Out of these three directories, the one we are interested in is the aci directory. If you look at its contents, you will see that the sub-directories are the same as the sections you see in the GUI.

Output from the APIC Controller:
=====================
admin@apic1:~> 
admin@apic1:~> ls
aci  debug  mit
admin@apic1:~> pwd
/home/admin
admin@apic1:~> cd aci
admin@apic1:aci> ls
admin  fabric  l4-l7-services  system  tenants  vm-networking
admin@apic1:aci>

Thus, what you see in the GUI is nothing but the directory tree maintained by the APIC controller, and the various sections under each menu correspond to the files kept in these directories.
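
If you want a quick map of this tree without cd-ing into every folder, a couple of standard bash commands run from the admin home directory will do it. This is just a minimal sketch; it assumes the usual ls/find utilities are available in the APIC bash shell, and no output is shown here:

Example commands:
==========
# list the top-level menus, exactly as they appear in the GUI
ls ~/aci

# walk two levels deep to see the sub-sections under each menu
find ~/aci -maxdepth 2 -type d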

Now, in order to verify the fabric that has been registered, we can use the command "acidiag fnvread". This command shows how many leaf and spine nodes have been learnt, along with their respective IP addresses.

Output on APIC:
==========
admin@apic1:aci> acidiag fnvread
      ID             Name    Serial Number         IP Address    Role        State   LastUpdMsgId
-------------------------------------------------------------------------------------------------
     101          CGTLeaf        TEP-1-101       10.0.8.95/32    leaf       active   0
     102         CGTLeaf2        TEP-1-102       10.0.8.93/32    leaf       active   0
     110         CGTSpine        TEP-1-103       10.0.8.94/32   spine       active   0

Total 3 nodes

admin@apic1:aci>

The IP addresses that we see above are automatically allocated from the subnet we assign during the initial setup of the APIC controller; they are nothing but the loopback (TEP) addresses of the respective nodes. If you type help after the acidiag command, it is rejected as an invalid choice, but the resulting error message conveniently lists all the available sub-commands, which let you do multiple things with the fabric.

Output on APIC:
==========
admin@apic1:aci> acidiag help
usage: acidiag [-h] [-v]
               
               {rvread,crashsuspecttracker,stop,reboot,fnvreadex,start,version,rvreadle,verifyapic,avread,touch,installer,dbgtoken,fnvread,restart}
               ...
acidiag: error: invalid choice: 'help' (choose from 'rvread', 'crashsuspecttracker', 'stop', 'reboot', 'fnvreadex', 'start', 'version', 'rvreadle', 'verifyapic', 'avread', 'touch', 'installer', 'dbgtoken', 'fnvread', 'restart')
admin@apic1:aci>

In the above output, we can see that there are options to start, stop, and reboot the nodes, and to perform other actions as well. You can see more details about the fabric once you go into the fabric/inventory/pod-1 directory.

Output on APIC:
==========
admin@apic1:aci> ls
admin  fabric  l4-l7-services  system  tenants  vm-networking
admin@apic1:aci> cd fabric/
admin@apic1:fabric> ls
access-policies  fabric-policies  inventory
admin@apic1:fabric> cd inventory/
admin@apic1:inventory> ls
fabric-membership  pod-1  unmanaged-fabric-nodes  unreachable-nodes
admin@apic1:inventory> cd pod-1/
admin@apic1:pod-1> ls
CGTLeaf  CGTLeaf2  CGTSpine  mo  node-101  node-102  node-110  summary  tags  troubleshooting
admin@apic1:pod-1> cat summary 
# pod
id : 1

spines:
id   name      infrastructure-ip  in-band-management-ip  out-of-band-management-  up-time          status    
                                                         ip                                                  
---  --------  -----------------  ---------------------  -----------------------  ---------------  ----------
110  CGTSpine  10.0.8.94          0.0.0.0                0.0.0.0                  00:22:35:21.000  in-service

leaves:
id   name      infrastructure-ip  in-band-management-ip  out-of-band-management-  up-time          status    
                                                         ip                                                  
---  --------  -----------------  ---------------------  -----------------------  ---------------  ----------
101  CGTLeaf   10.0.8.95          0.0.0.0                0.0.0.0                  00:22:35:22.000  in-service
102  CGTLeaf2  10.0.8.93          0.0.0.0                0.0.0.0                  00:22:35:22.000  in-service

tags:
name
----
admin@apic1:pod-1>

Here we can see that there is no in-band or out-of-band management IP configured, though each node does have an infrastructure IP. We can also see the nodes in the fabric and their respective status, just as we saw with the acidiag fnvread command. Now, if we go back and check the fabric-membership directory, we can see the hardware models used for the spine and leaf nodes.

Output on APIC:
==========
admin@apic1:pod-1> cd ..
admin@apic1:inventory> ls
fabric-membership  pod-1  unmanaged-fabric-nodes  unreachable-nodes
admin@apic1:inventory> cd fabric-membership/
admin@apic1:fabric-membership> ls
clients  node-policies
admin@apic1:fabric-membership> cd clients/
admin@apic1:clients> ls
summary  [TEP-1-101]  [TEP-1-102]  [TEP-1-103]
admin@apic1:clients> cat summary 
clients:
serial-number  node-id  node-name  model        role   ip            decomissioned  supported-model
-------------  -------  ---------  -----------  -----  ------------  -------------  ---------------
TEP-1-101      101      CGTLeaf    N9K-C9396PX  leaf   10.0.8.95/32  no             yes            
TEP-1-102      102      CGTLeaf2   N9K-C9396PX  leaf   10.0.8.93/32  no             yes            
TEP-1-103      110      CGTSpine   N9K-C9508    spine  10.0.8.94/32  no             yes            
admin@apic1:clients>

We can see from the above output that the spine is a Nexus 95xx chassis (N9K-C9508), whereas the leaf nodes are Nexus 93xx switches (N9K-C9396PX). The other directories that you notice above ([TEP-1-101] and so on) contain nothing but per-node views of the summary we see here. Now, if you go back to the pod-1 directory and into a particular node, say the spine (CGTSpine), you can see various other directories such as protocols, interfaces, chassis, etc. If we go into the interfaces directory, we can see all the 40G ports and also see which ones are connected and up.

Output on APIC:
==========
admin@apic1:[TEP-1-101]> cd ..
admin@apic1:clients> cd ../..
admin@apic1:inventory> ls
fabric-membership  pod-1  unmanaged-fabric-nodes  unreachable-nodes
admin@apic1:inventory> cd pod-1/
admin@apic1:pod-1> ls
CGTLeaf  CGTLeaf2  CGTSpine  mo  node-101  node-102  node-110  summary  tags  troubleshooting
admin@apic1:pod-1> cd CGTP
bash: cd: CGTP: No such file or directory
admin@apic1:pod-1> cd CGTSpine/
admin@apic1:CGTSpine> ls
chassis  fabric-extenders  interfaces  mo  processes  protocols  span-sessions  summary  tags  troubleshooting
admin@apic1:CGTSpine> cd interfaces/
admin@apic1:interfaces> ls
aggregated-interfaces           management-interfaces  routed-loopback-interfaces  tunnel-interfaces
encapsulated-routed-interfaces  physical-interfaces    routed-vlan-interfaces      vpc-interfaces
admin@apic1:interfaces> cd physical-interfaces/
admin@apic1:physical-interfaces> cat summary 
physical-interfaces:
interface  speed    layer   mode   cfg-access-vlan  cfg-native-vlan  bundle-index  oper-duplex  oper-state  oper-state-reason  switching-state
---------  -------  ------  -----  ---------------  ---------------  ------------  -----------  ----------  -----------------  ---------------
eth5/1     40-gbps  routed  trunk  unknown          unknown          unspecified   auto         up          link-up-connected  enabled        
eth5/2     40-gbps  routed  trunk  unknown          unknown          unspecified   auto         up          link-up-connected  enabled        
eth5/3     40-gbps  routed  trunk  unknown          unknown          unspecified   auto         down        disabled           disabled       
eth5/4     40-gbps  routed  trunk  unknown          unknown          unspecified   auto         down        disabled           disabled       
eth5/5     40-gbps  routed  trunk  unknown          unknown          unspecified   auto         down        disabled           disabled       
eth5/6     40-gbps  routed  trunk  unknown          unknown          unspecified   auto         down        disabled           disabled       

. . . . .Output Truncated . . . . .

The two links above that are in the up state are the ones connected to the leaf nodes, CGTLeaf and CGTLeaf2.
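
On a fully populated spine, scrolling through the whole summary gets tedious. Since these summaries are plain text files, standard grep can filter out just the interfaces that are up. A small sketch, assuming grep is present in the APIC bash shell and using the path from the session above:

Example command:
==========
# show only the physical interfaces of CGTSpine whose oper-state is up
grep " up " /home/admin/aci/fabric/inventory/pod-1/CGTSpine/interfaces/physical-interfaces/summary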

Since we know that IS-IS runs between the fabric nodes (ACI provides an L2 service over an L3 fabric), we can check it under the protocols directory.

Output on APIC:
==========
admin@apic1:CGTSpine> cd protocols/
admin@apic1:protocols> ls
arp  bgp  cdp  coop  dhcp  igmp  ip-interfaces  ip-routes  ip-static-routes  isis  lacp  lldp  ospf
admin@apic1:protocols> cd isis/
admin@apic1:isis> ls
overlay-1  summary
admin@apic1:isis>
admin@apic1:isis> cat summary 
isis:
name       sys-id             area-id  protocol  mode    oper-state
---------  -----------------  -------  --------  ------  ----------
overlay-1  6E:00:00:00:00:00  1        ip        fabric  unknown   
admin@apic1:isis> cd overlay-1/
admin@apic1:overlay-1> ls
discovered-tunnel-endpoints  is-is-fabric-multicast-trees  is-is-interfaces  is-is-routes  lsp-records  neighbors  summary  traffic
admin@apic1:overlay-1> cat summary 
# is-is-fabric-multicast-tree
name                         : overlay-1
sys-id                       : 6E:00:00:00:00:00
area-id                      : 1
protocol                     : ip
mode                         : fabric
oper-state                   : unknown
max-ecmp-size                : 12
metric-style                 : narrow
mtu                          : 1492
context-id                   : 0
fast-csnps-sent              : 0
fast-lsps-sent               : 0
total-lsp-purged             : 0
total-lsp-refreshed          : 0
total-lsp-sourced            : 0
mts-error                    : no
sequence-wrap-error          : no
total-spf-calculations       : 0
urib-error                   : no
overload-state               : on-at-bootup
overload-after-startup-time  : 00:00:10:00.000
admin@apic1:overlay-1> cd is-is-interfaces/
admin@apic1:is-is-interfaces> ls
eth5:1.8  eth5:2.35  lo0  lo1  lo2  lo3  lo4  summary
admin@apic1:is-is-interfaces> cat summary 
is-is-interfaces:
id         admin-state  circuit-type  control             protocol-state
---------  -----------  ------------  ------------------  --------------
lo0        enabled      l1-is-type    advert-tep,passive                
eth5/2.35  enabled      l1-is-type                                      
eth5/1.8   enabled      l1-is-type                                      
lo1        enabled      l1-is-type    advert-tep,passive                
lo2        enabled      l1-is-type    advert-tep,passive                
lo3        enabled      l1-is-type                                      
lo4        enabled      l1-is-type                                      
admin@apic1:is-is-interfaces>

Thus, from the above output, we can see all the interfaces that are participating in IS-IS. Note that none of this was configured manually; it is all automatic. The ACI fabric performs this discovery using the LLDP protocol. We can go to the lldp directory and check the neighbors:

Output:
=====
admin@apic1:neighbors> pwd
/home/admin/aci/fabric/inventory/pod-1/CGTSpine/protocols/lldp/neighbors
admin@apic1:neighbors> cat summary 
neighbors:
device-id  local-interface  hold-time  capability  port-id          
---------  ---------------  ---------  ----------  -----------------
CGTLeaf    eth5/1           120        router      92:7e:b3:07:07:72
CGTLeaf2   eth5/2           120        router      92:a0:ee:09:cf:b1
admin@apic1:neighbors>

Thus, we can see both leaf nodes as LLDP neighbors of this spine node.
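
Since every node in pod-1 exposes the same directory layout, a small bash loop can print the LLDP neighbor summary of all three switches in one go. This is only a sketch; it assumes the leaf nodes carry the same protocols/lldp/neighbors/summary file that we just looked at on the spine:

Example commands:
==========
cd /home/admin/aci/fabric/inventory/pod-1
# print the LLDP neighbor table of each switch in the pod
for node in CGTLeaf CGTLeaf2 CGTSpine; do
    echo "=== $node ==="
    cat "$node/protocols/lldp/neighbors/summary"
done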

There is a lot more that we can do from the APIC CLI; we shall cover that in another post. Once you get to work on an APIC controller, get your hands on the CLI and try surfing through the various directories.

Hope this post was helpful.

Feel free to reach out to me for any further queries.

Cheers...!!!

Genie
www.codergenie.com 
