Postman is a great tool to test APIs; in Postman we can group API calls into what is known as a collection, which can also be run from the CLI via Newman.
Postman has a lot of good documentation to get started with testing API functions. ArubaOS 8.x does provide a Swagger interface that can be used directly to test and run APIs, but the collection feature in Postman adds useful automation.
I will use Postman to take tech-support logs and export them to an external TFTP/FTP server.
Params correspond to the request parameters that are appended to the request URL; they are mostly used with GET requests. The Body, on the other hand, is the actual request body (it usually carries the payload or the data to be processed by the request). PUT and POST requests usually read their data from the body of the request, not from the params.
Login to Aruba Controller via Postman:
The Aruba API Guide provides good documentation on the APIs available in the 8.x version. You can also reach the Swagger interface for the device by logging in to the MM and going to the URL: https://<controller-ip>:4343/api
Please note that port 4343 should be open on the firewall and/or allowed on the controller.
I will use the POST function in Postman with the username and password keys in the body of the request to log in. A successful authentication returns a UIDARUBA value, which can be used as a token for all subsequent GET/SET requests after the login.
In the above example I have used collection variables to supply the username and password to the POST function. Postman also provides the option to write test scripts to capture responses and validate the success or failure of a request. I used such a script to capture the value of UIDARUBA from the response and set it as a variable for the subsequent requests.
The variables are saved as Collection variables:
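If you want to reproduce the same login flow outside Postman, here is a minimal Python sketch using only the standard library. The /v1/api/login path and the _global_result/UIDARUBA response fields follow the ArubaOS 8.x REST API as I understand it; verify both against your controller's Swagger page before relying on them.

```python
import json
import ssl
import urllib.parse
import urllib.request

LOGIN_PATH = "/v1/api/login"  # ArubaOS 8.x REST login endpoint


def extract_uidaruba(response_body: str) -> str:
    """Pull the UIDARUBA session token out of the JSON login response."""
    return json.loads(response_body)["_global_result"]["UIDARUBA"]


def login(host: str, username: str, password: str) -> str:
    """POST the credentials to the controller and return the UIDARUBA token."""
    url = "https://%s:4343%s" % (host, LOGIN_PATH)
    body = urllib.parse.urlencode({"username": username, "password": password}).encode()
    ctx = ssl._create_unverified_context()  # lab use only: skips certificate validation
    with urllib.request.urlopen(urllib.request.Request(url, data=body), context=ctx) as resp:
        return extract_uidaruba(resp.read().decode())


# Against a live controller (placeholder IP/credentials):
#   token = login("10.1.1.10", "admin", "secret")
#   print("UIDARUBA:", token)
```

The UIDARUBA value returned here plays the same role as the collection variable set by the Postman test script.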
I am running another request in the Postman collection to create a logs.tar file on the Aruba controller. The required POST params and body content can be taken from the controller's Swagger interface.
Please set the body content format as JSON.
The final POST request transfers the logs.tar.7z file from flash: to the TFTP server.
A review of the MM audit-trail logs confirmed the commands pushed for generating the logs.tar file and the TFTP transfer.
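The configuration requests above share the same shape: a POST to /v1/configuration/object/<name> with config_path and UIDARUBA in the query string and the object parameters as a JSON body. The sketch below builds such a request in Python; the object name copy_flash_tftp and its parameter keys are assumptions for illustration only, so confirm the real names in your controller's Swagger interface.

```python
import json
import urllib.parse


def build_config_post(host: str, obj: str, uidaruba: str, payload: dict,
                      config_path: str = "/mm/mynode"):
    """Return (url, json_body) for an ArubaOS 8.x configuration POST.
    The session token and config node path travel in the query string,
    the object parameters in the JSON body."""
    query = urllib.parse.urlencode({"config_path": config_path, "UIDARUBA": uidaruba})
    url = "https://%s:4343/v1/configuration/object/%s?%s" % (host, obj, query)
    return url, json.dumps(payload).encode()


# "copy_flash_tftp" and its keys are hypothetical -- look up the real
# object name and parameters in your controller's Swagger interface.
url, body = build_config_post("10.1.1.10", "copy_flash_tftp", "abc123",
                              {"srcfilename": "logs.tar.7z",
                               "tftphost": "10.1.1.50",
                               "destfilename": "logs.tar.7z"})
```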
Support for certificate-based login to the MM/MD is a good security capability of the Aruba MM/MDs.
Aruba still does not support importing a public key directly, so your SSH public key can only be imported into the Aruba controller wrapped in an X.509 certificate. We therefore first need to create a certificate that includes your public key, along with the matching private key.
I will use OpenSSL to create a public/private key pair and to generate a self-signed certificate for uploading to the Aruba controller.
I will work from a Windows laptop and use PuTTY for SSH.
Installing OpenSSL on Windows Machine:
Thanks to Shining Light Productions for providing an installer version of OpenSSL for Windows:
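With OpenSSL installed, the key pair and the self-signed certificate can be generated with two commands, sketched here as a small Python wrapper. The file names, the 365-day validity, and the CN value are placeholders I chose; adjust them to your environment (for example if you want the certificate CN to match the mgmt account name).

```python
import subprocess


def openssl_commands(key_file="mykey.pem", cert_file="mycert.pem",
                     common_name="aruba-mgmt"):
    """The two OpenSSL invocations: generate an RSA key pair, then wrap the
    public key in a self-signed X.509 certificate."""
    return [
        ["openssl", "genrsa", "-out", key_file, "2048"],
        ["openssl", "req", "-new", "-x509", "-key", key_file, "-out", cert_file,
         "-days", "365", "-subj", "/CN=%s" % common_name],
    ]


# To actually run them (requires openssl on the PATH):
# for cmd in openssl_commands():
#     subprocess.run(cmd, check=True)
```

The resulting mycert.pem is what gets uploaded to the controller in the next step, while mykey.pem stays on the laptop for PuTTY.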
We need to upload the Certificate to the Aruba Controller as a Public Certificate, browse and upload the Certificate:
Adding Mgmt Account Associated to the Cert:
Create the mgmt account with required access and associate the Certificate to the mgmt-account.
Using Putty to Login to the Controller:
We need to present the private key from the PC to log in to the controller. However, PuTTY requires the key to be in a specific format, otherwise it gives an error as follows:
We will download PuTTYgen to convert the private key into the format PuTTY expects.
Open PuTTYgen and use the Load option to load the previously created private key. Once loaded, save it in .ppk format using the Save private key option; it is also better to add a key passphrase while saving.
Now that the private key is in the correct format, load it into PuTTY and SSH to the Aruba controller. The following shows how and where to load the private key in PuTTY:
You will be prompted for the username and the passphrase set during the key conversion. Enter the username of the mgmt account created on the Aruba controller and the passphrase, and you should log in fine.
More details on how to set it up on the Aruba 6.x architecture:
Almost all vendors implement the DHCP Option 60 (RFC 2132) Vendor Class Identifier in their own way. DHCP Option 60 is a string that the access point includes in the DHCP Discover packet sent to the DHCP server.
A DHCP server can be configured to filter on received option 60 string values and forward standard or vendor specific options (Option 43) in DHCP offer and acknowledgement packets. Filtering using option 60 allows different types of devices that require vendor-specific information to co-exist in a common broadcast domain. (Having Cisco and Aruba AP in the same subnet, the DHCP Server should be configured with DHCP Option 43 and 60 for each Vendor).
If you do not specify an option 60 for a scope, the content of option 43 is returned to any DHCP client asking for an IP address in that subnet. In general we should define option 60 in the DHCP scope, as that makes sure option 43 is returned only to APs and not to other clients, but it also depends on the type of DHCP server. For example, Cisco IOS based DHCP scopes allow only one option 60 string (VCI) per scope, so you may not want to use it if you have APs from different vendors in the same subnet served by an IOS based DHCP server. There is no such limitation on Windows Server, so there the correct procedure is to define option 60 for each vendor.
The string value that is forwarded to the DHCP server depends on the Aruba access point's mode.
Aruba Access Points also require specific option 60 values to be returned in DHCP offer and acknowledgement packets for the vendor-specific information to be considered, the expected option 60 value again depending on the mode of the Access Point. If the expected option 60 value is not present in the DHCP offer or acknowledgement packet, any supplied vendor-specific information is ignored.
Vendor Specific Information (Option 43):
Aruba Access Points support vendor-specific information provided in offer and acknowledgement packets. The type of vendor-specific information supported by an Aruba Access Point depends on its mode. For example, Instant mode Access Points can be supplied with HTTP Proxy server (Option 148) and/or AirWave server (Option 43) information, while Campus mode or Unified Access Points can be supplied with Mobility Controller information.
AirWave Server Discovery For IAP:
HTTP Proxy For IAP:
******Please note that both the username and password are forwarded to Instant mode Access Points in offers and acknowledgements in clear text. ******
The HTTP Proxy option can be used with Instant mode Access Points that are managed by AirWave or Central. When managed by AirWave, the HTTP Proxy option can be combined with the AirWave Server Discovery option.
Controller Discovery For Campus/Unified AP:
Let's see the DHCP scope configuration for IAPs and Campus/Unified APs.
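To make the encoding concrete, here is a small Python sketch of how the option 60/43 bytes look on the wire for a Campus/Unified AP ("ArubaAP" VCI with the controller IP carried as an ASCII string in option 43) and for the Instant AP VCI. These VCI strings are the commonly documented values; verify them for your AP mode and software release.

```python
def dhcp_option(code: int, value: str) -> bytes:
    """Encode one DHCP option as code | length | ASCII payload (RFC 2132 TLV)."""
    data = value.encode("ascii")
    return bytes([code, len(data)]) + data


# Campus/Unified AP: VCI "ArubaAP", option 43 carrying the controller IP as ASCII
campus_opt60 = dhcp_option(60, "ArubaAP")
campus_opt43 = dhcp_option(43, "10.1.1.10")  # placeholder controller IP

# Instant AP: VCI "ArubaInstantAP" (what you return in option 43 depends on
# your deployment, e.g. AirWave details, so only the VCI is shown here)
iap_opt60 = dhcp_option(60, "ArubaInstantAP")
```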
The Aruba Mobility Controllers support an SCP server feature: you can enable the SCP service on the MMs and MDs. This helps transfer files between the MM and MDs without needing an external SCP server, and also allows transferring files to and from any device running an SCP client.
The command to enable/disable the service:
(Test)[MDC] (config)#service scp
(Test)[MDC] (config)#no service scp
This feature, however, is not full-featured SCP support. It only supports the native SCP protocol; SFTP/WinSCP protocols are not supported. If you try to connect via the WinSCP GUI it will not work, but you can use the scp CLI command to upload and download files to the Mobility Controllers.
If you have an SCP client installed on your device, you can use the command line to download or upload files.
In the following example I am using the command line of my Windows laptop to download the logs.tar file from the Aruba controller. Here x.x.x.x is the controller IP; I need to type the root admin password when prompted before the transfer starts.
C:\Users\admin>scp email@example.com:logs.tar.7z C:\Users\admin
The authenticity of host 'x.x.x.x (x.x.x.x)' can't be established.
RSA key fingerprint is SHA256:kig0wBq0xYQKZsSi/C1zvTs9eGaDXj920VjuMLxdX38.
Are you sure you want to continue connecting (yes/no)?
Warning: Permanently added 'x.x.x.x' (RSA) to the list of known hosts.
firstname.lastname@example.org's password:
logs.tar.7z                              100%   19MB  18.8MB/s   00:01
Similarly, I can use the CLI to upload a file to the Aruba controller. In the following command I am uploading a TEST.txt file to the Aruba Mobility Controller with IP x.x.x.x. I will be prompted for the admin password of the controller.
There are many situations where we would like to control which MD the APs and clients connect to. This might be needed for a validation test or other use cases.
The Cluster feature does allow you to disable load balancing for APs, and the ap-move command, the LMS, or static configuration give you ways to point a specific AP at a specific controller. However, there is no straightforward method to control clients: the Cluster feature has no option to disable client load balancing, and there is no client-move command like the one available for APs.
There is, however, a method by which you can define which UAC a station/client index should be assigned to, giving you control over the UAC for a specific client/station index.
******Please note this is not a recommended practice and should not be performed regularly. This command is also disruptive and will impact other clients as well.******
This method moves the client from one UAC to another, but unfortunately, it changes the UAC and Standby UAC of all clients in the same bucket index, not a single client, because UACs and Standby UACs of clients follow the bucket index.
Following are the steps:
1. Identify the bucket index / station index for your test client.
(Test)[MDC] #cluster-debug calc-sta-uac 5a:10:5e:5a:21:c9 TEST
STA Index:178
STA A-UAC:10.1.2.36
STA S-UAC:10.1.2.35
The above command gives the bucket index / station index 178 for the client MAC 5a:10:5e:5a:21:c9 connected to the TEST SSID.
2. Identify the UAC ID assigned to the MD where you want to move the test Client.
(Test) [MDC] #show aaa cluster essid-all bucketmap
Bucket map for Test, Rcvd at : Tue Sep 22 08:41:14 2020
Item    Value
----    -----
Essid   Test
UAC0    10.17.10.10
UAC1    10.17.10.11
From the above you can see that for the Test SSID, MD 10.17.10.10 is assigned ID 0 and MD 10.17.10.11 is assigned ID 1.
3. Identify the Cluster Leader.
From my experience the cluster leader is assigned UAC ID 0, however it is always better to confirm.
The Aruba 8.x architecture introduced a new feature: clustering. A cluster is a combination of multiple managed devices (MDs) working together to provide high availability to all clients, ensuring service continuity and load balancing.
The clustering feature provides:
Seamless Roaming
Client Stateful Switchover
AP and Client Load Balancing
Cluster bucketmap/mapping table:
The Aruba controller creates a mapping table called the bucketmap. The bucketmap is a table that maps the station/client index to the UAC (User Anchor Controller). This table is pre-populated even before any client/user connects to an Aruba AP.
The bucketmap is per SSID; each SSID has its own bucketmap. The output below shows the two bucketmaps for the SSIDs Test and LAB_TEST.
As you can see, the bucketmap's inputs are the SSID and the set of UACs, so the bucketmap does not change until the cluster size changes or a new SSID is added. We can pretty much say the bucketmap is permanent, and this is what allows each client to keep the same UAC wherever it connects, to any AP in the network.
Once the bucketmap is computed, the Cluster leader pushes the mapping table to all the APs.
Now that the AP has this mapping table, it uses it to decide the UAC for any client that connects. When a user connects to an AP, the AP runs a hashing algorithm on the user MAC address (the algorithm uses the last 3 bytes of the client MAC) and produces a station index. This station index is a value between 0 and 255 and is looked up in the mapping table/bucketmap to decide the UAC for the client.
Following diagram gives a brief of the process.
In the output below you see the details for the user with mac address: 20:4c:03:62:aa:d3 and IP address 10.17.11.90.
The station/client index computed for the client is 27. The station index was looked up in the bucketmap, which identified 10.17.10.10 as the UAC for this station/client.
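The lookup logic can be sketched in a few lines of Python. Aruba's exact hashing algorithm is not published, so the hash below is only an illustrative stand-in (it will not reproduce the index 27 from the real output above); what it shares with the real one is that it uses only the last 3 bytes of the client MAC and yields a bucket index between 0 and 255 that indexes the per-ESSID bucketmap.

```python
def station_index(mac: str) -> int:
    """Illustrative stand-in for the AP's hash: use the last 3 bytes of
    the client MAC to derive a bucket index in the range 0-255."""
    last3 = bytes.fromhex(mac.replace(":", ""))[-3:]
    return sum(last3) % 256


def uac_for(mac: str, bucketmap: list) -> str:
    """Look the station index up in the 256-entry per-ESSID bucketmap."""
    return bucketmap[station_index(mac)]


# Two-node cluster: the UACs alternate across the 256 buckets
bucketmap = ["10.17.10.10", "10.17.10.11"] * 128
print(uac_for("20:4c:03:62:aa:d3", bucketmap))
```

The key property this models is that the mapping depends only on the client MAC and the (stable) bucketmap, so the client lands on the same UAC no matter which AP it connects through.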
When the cluster is created, the cluster leader builds a bucketmap for each ESSID that is part of the cluster. The cluster leader takes the total number of cluster members and assigns a number to each one, starting with 00, 01, 02… for however many cluster MCs there are. The cluster leader then creates a bucketmap for each ESSID and distributes the numbers across it. Think of the bucketmap as simply two lookup tables per ESSID with 256 positions in each: the first table holds the active (UAC) pointers and the second the standby (S-UAC) pointers.
When the maps are created for each ESSID, they are sent to the APs. The maps stay the same as long as you do not add an MC to or remove an MC from the cluster. So the APs hold the maps. When a user tries to connect to an ESSID, the AP performs a hash on the extended unique identifier of the user MAC address (the last 6 hex digits, i.e. the last 3 bytes) and generates a number between 0 and 255.
Once the AP has generated the hash for the client, it simply uses the hash value to look up that position in both the active and standby tables in the bucketmap, finds the controller number (00, 01, …), cross-references it to the IP address of the MC, and establishes the UAC and S-UAC tunnels for the user.
Iperf is a handy tool to measure the bandwidth and the quality of a network link. It is a commonly used network testing tool that can create Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) data streams and measure the throughput of a network that is carrying them.
Iperf allows the users to vary various parameters that can be used for testing the network, or alternatively for optimizing and tuning a network. Iperf has a client and server functionality, and can measure the throughput between the two ends, either unidirectionally or bi-directionally.
Iperf can be installed very easily on any Linux or Microsoft Windows system, where one host can be configured as a client, the other one as server.
Setup required for running the iperf test:
1. Download the iperf setup; you can get it from: https://iperf.fr/
2. Copy the setup file to the two hosts you will use to perform the test.
3. Set one host in server mode and the other in client mode with the following syntax:
To set the host in server mode use the command : iperf -s
C:\IOS Images\iperf-2.0.5-2-win32>iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
To set the other host in client mode use the command: iperf -c <server ip address>
C:\IOS Images\iperf-2.0.5-2-win32>iperf -c 192.168.1.5 // Where 192.168.1.5 is server ip address.
Client/Server:
  -f, --format [kmKM]    format to report: Kbits, Mbits, KBytes, MBytes
  -i, --interval #       seconds between periodic bandwidth reports
  -l, --len #[KM]        length of buffer to read or write (default 8 KB)
  -m, --print_mss        print TCP maximum segment size (MTU - TCP/IP header)
  -o, --output           output the report or error message to this specified file
  -p, --port #           server port to listen on/connect to
  -u, --udp              use UDP rather than TCP
  -w, --window #[KM]     TCP window size (socket buffer size)
  -B, --bind <host>      bind to <host>, an interface or multicast address
  -C, --compatibility    for use with older versions, does not send extra msgs
  -M, --mss #            set TCP maximum segment size (MTU - 40 bytes)
  -N, --nodelay          set TCP no delay, disabling Nagle's Algorithm
  -V, --IPv6Version      set the domain to IPv6
Server specific:
  -s, --server           run in server mode
  -U, --single_udp       run in single threaded UDP mode
  -D, --daemon           run the server as a daemon
Client specific:
  -b, --bandwidth #[KM]  for UDP, bandwidth to send at in bits/sec (default 1 Mbit/sec, implies -u)
  -c, --client <host>    run in client mode, connecting to <host>
  -d, --dualtest         Do a bidirectional test simultaneously
  -n, --num #[KM]        number of bytes to transmit (instead of -t)
  -r, --tradeoff         Do a bidirectional test individually
  -t, --time #           time in seconds to transmit for (default 10 secs)
  -F, --fileinput        input the data to be transmitted from a file
  -I, --stdin            input the data to be transmitted from stdin
  -L, --listenport #     port to receive bidirectional tests back on
  -P, --parallel #       number of parallel client threads to run
  -T, --ttl #            time-to-live, for multicast (default 1)
  -Z, --linux-congestion set TCP congestion control algorithm (Linux only)
Miscellaneous:
  -x, --reportexclude [CDMSV]  exclude C(connection) D(data) M(multicast) S(settings) V(server) reports
  -y, --reportstyle C    report as Comma-Separated Values
  -h, --help             print this message and quit
  -v, --version          print version information and quit
[KM] Indicates options that support a K or M suffix for kilo- or mega-
Use the syntax with some additional parameters: iperf.exe -c <server ip> -P 10 -w 1000k (-P refers to the number of parallel TCP streams and -w refers to the TCP window size).
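If you script iperf runs, the summary line can be parsed to pull out the measured bandwidth. The sketch below assumes the default iperf 2 report format shown in the sample line; adapt the pattern if your iperf version prints a different layout.

```python
import re


def parse_bandwidth(line: str):
    """Extract (value, unit) from an iperf 2 summary line such as
    '[  3]  0.0-10.0 sec  1.12 GBytes   963 Mbits/sec'."""
    m = re.search(r"([\d.]+)\s+([KMG]?bits/sec)", line)
    if not m:
        raise ValueError("no bandwidth field in line")
    return float(m.group(1)), m.group(2)


report = "[  3]  0.0-10.0 sec  1.12 GBytes   963 Mbits/sec"
print(parse_bandwidth(report))  # (963.0, 'Mbits/sec')
```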
Often the CA (third-party Certificate Authority) does not provide a chained cert; rather, they provide a signed server cert and may provide the intermediate CA cert and the root CA cert separately.
In some cases they provide only a signed server cert and expect you to download the intermediate and root certs and chain the final cert yourself if required. Many vendor devices do not support an unchained server cert and expect a chained server cert before it can be uploaded to the device.
Let's see how we can generate a chained cert from an unchained certificate. I'll use the following server cert as an example.
The above cert is a server cert issued by "Go Daddy", a well-known CA. However, the certificate is not chained; if you open it in Notepad you'll find it is just the server cert.
For generating a chained cert you need to append the Server cert with the Intermediate CA cert and the Root CA cert. In our case “Go Daddy Secure Certificate Authority” is the Intermediate CA and “Go Daddy Class 2 Certificate Authority” is the Root CA.
The way to append the files is to keep the server cert on top, followed by the intermediate CA cert and then the root CA cert, i.e. just the opposite of the order shown in the Certification Path tab of the server cert. Open all the certificates in Notepad, then open a blank Notepad file and paste in the server cert, followed by the intermediate cert and then the root cert, and save this as the final cert, which should be ready to be uploaded to the device.
-----BEGIN CERTIFICATE-----
Server Cert
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
Intermediate CA Cert
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
Root CA Cert
-----END CERTIFICATE-----
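The same appending can be scripted instead of done in Notepad. The following Python sketch simply concatenates the three PEM blocks in the required order; the file names in the commented usage are placeholders.

```python
def chain_certs(server_pem: str, intermediate_pem: str, root_pem: str) -> str:
    """Concatenate PEM blocks in the required order:
    server cert first, then intermediate CA, then root CA."""
    return "\n".join(p.strip() for p in (server_pem, intermediate_pem, root_pem)) + "\n"


# With real files it would look like (file names are placeholders):
#   chained = chain_certs(open("server.crt").read(),
#                         open("intermediate.crt").read(),
#                         open("root.crt").read())
#   open("chained.crt", "w").write(chained)
```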
All the certificates on Windows 7 are stored in the Windows registry and not in any specific folder. You can view the certificates using the certificate manager (type certmgr.msc and it will bring up the following window).
For Mac users the certificates are stored in Keychain Access (in the Finder, open Utilities and then open Keychain Access). These are the repositories where all the certificates are stored and referenced to check whether a certificate is valid, i.e. whether the issuing Certificate Authority is a trusted root CA. There is a chance that an intermediate CA certificate has expired, which will cause the entire chain to become invalid (untrusted). In a recent incident DigiCert's intermediate certificate expired, which caused multiple users to get an untrusted-certificate error.
The expired certificate in question was the "DigiCert High Assurance EV Root CA" [expiration July 26, 2014] certificate. This temporary intermediate certificate was used in years past as part of a compatibility chain for older devices. The problem was related to a locally installed legacy intermediate certificate that was no longer used and no longer required for certificate installation. This certificate had not been used for over three years and was unnecessary for installations; however, the devices having issues had not been updated. The affected users appear to have had the expired intermediate in the 'login' keychain, stored locally on their server, or installed on a backend server or application.
DigiCert fixed the issue for customers by having the old cert removed from their machines and the new valid intermediate cert installed on these devices.
How to create the chained cert when the root CA cert and intermediate CA cert are not provided by the CA:
Usually your CA will provide you the intermediate CA cert and the root CA cert, or the steps to get them from their website. If that is not the case and the issuer is a well-known CA, you should already have the intermediate and root certs on your laptop in the registry or in Keychain Access. Let's see how to get the intermediate and root CA certificates.
Click on the server cert to open it. Go to the "Certification Path" tab and click on the intermediate certificate; for our test certificate it is "Go Daddy Secure Certificate Authority".
Click View Certificate at the lower right, which opens the intermediate CA cert. Now we want to export this cert so we can use it for chaining. Go to the Details tab of the certificate.
Click on Copy to File, which should open up the export Wizard.
Click Next > choose the format: "Base-64 encoded X.509 (.CER)"
Click Next > browse and give a name to the file. (Remember this is the intermediate CA cert, so save it somewhere on your laptop with a name like intermediatecert.) Click Next and Finish. This will export the intermediate CA cert to your desktop. Now repeat the same process to export the root CA cert: click on the root CA cert in the server cert's certification path (or in the intermediate CA cert).
Once you have exported both the intermediate and the root CA certs, you can open them in Notepad and append them to the server cert as discussed initially.
The certificates are stored in the registry at: HKLM/Software/Microsoft/SystemCertificates
Personal certificates, or other certificates specific to the logged in user are at: HKCU/Software/Microsoft/SystemCertificates
They are stored as binary blobs, so they need to be decoded, and the MMC plugin is a good way to do this.
Often, when information is not polled correctly on Cisco PI from your WLC or other added devices, you would like to check whether the device is responding to the SNMP queries sent by Cisco Prime.
An SNMP walk is a good test to check whether you are getting any SNMP response from the managed devices. The following is the syntax for an SNMP walk from your Cisco Prime using SNMPv2c and SNMPv3 (the SNMPv3 auth/priv protocols shown, SHA and AES, should match what is configured on the device):

snmpwalk -v2c -c <community-string> <device-ip>
snmpwalk -v3 -l authPriv -u <username> -a SHA -A <auth-password> -x AES -X <priv-password> <device-ip>
You need to have root access to run the snmpwalk on the Cisco Prime.