Image source: infohub.delltechnologies.com/section-assets/powerflex-vrops-infographics
Consolidating all my blogs on Dell EMC PowerFlex Management Pack for vROps 8.x.
In the previous post, we went through the basic NSX-T configurations. The next step is to deploy Edge VMs and create an Edge cluster. Before creating the Edge VMs, we need to create two trunk port groups on the VDS using the VCSA web UI. The uplinks of the Edge VMs will be connected to these two trunk port groups.
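If you prefer to script this step instead of clicking through the VCSA web UI, the same trunk port group can be created with pyVmomi. The following is a minimal sketch, not the exact procedure from this series: the vCenter address, credentials, VDS name "VDS01", port group name "Edge-Trunk-A", and the 0-4094 trunk range are all placeholder assumptions to adjust for your environment.

```python
# Hedged pyVmomi sketch: create a trunk port group on an existing VDS.
# Host, credentials, and object names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the existing VDS by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "VDS01")
view.Destroy()

# Port group spec: trunk the full VLAN range (0-4094) for the Edge uplinks.
spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.name = "Edge-Trunk-A"
spec.type = "earlyBinding"
policy = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
policy.vlan = vim.dvs.VmwareDistributedVirtualSwitch.TrunkVlanSpec(
    vlanId=[vim.NumericRange(start=0, end=4094)], inherited=False)
spec.defaultPortConfig = policy

task = dvs.AddDVPortgroup_Task([spec])  # repeat with "Edge-Trunk-B"
Disconnect(si)
```

Running it a second time with the second port group name gives you the pair of trunk port groups the Edge VM uplinks will attach to.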
In this post, we will take a look at creating custom alerts for PowerFlex by adding symptom definitions and alert definitions. Refer to my previous blog post to understand more about the alerting concepts in vROps. Here we will take an example scenario and walk through creating the custom symptom definitions and alert definitions for it.
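As a side note, the same definitions can also be created programmatically through the vROps Suite REST API rather than the UI used in this post. The sketch below is only an illustration under assumptions: the hostname, credentials, adapter kind key, resource kind, metric key, and threshold are placeholders, and the payload fields should be verified against the Suite API documentation for your vROps version before use.

```python
# Hedged sketch: create a symptom definition via the vROps Suite API.
# All names and keys below are illustrative placeholders -- verify them
# against the Suite API docs and your PowerFlex adapter before use.
import requests

VROPS = "https://vrops.lab.local"   # placeholder hostname
AUTH = ("admin", "changeme")        # placeholder credentials

symptom = {
    "name": "PowerFlex SDS high capacity usage",
    "adapterKindKey": "PowerFlexAdapter",  # assumption: check the real key
    "resourceKindKey": "SDS",              # assumption: check the real kind
    "waitCycles": 1,
    "cancelCycles": 1,
    "state": {
        "severity": "WARNING",
        "condition": {                     # hard-threshold condition
            "type": "CONDITION_HT",
            "key": "capacity|used_pct",    # placeholder metric key
            "operator": "GT",
            "value": "80",
            "valueType": "NUMERIC",
        },
    },
}

resp = requests.post(
    f"{VROPS}/suite-api/api/symptomdefinitions",
    json=symptom, auth=AUTH, verify=False,  # lab only: skips cert checks
    headers={"Accept": "application/json"})
resp.raise_for_status()
print(resp.json())
```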
Scenario
In this post, we will take a look at the different configuration steps that are required before enabling Workload Management to establish connectivity to the supervisor cluster, supervisor namespaces, and all objects that run inside the namespaces, such as vSphere Pods and Tanzu Kubernetes clusters.
At this point, I assume that the vSAN cluster is up and NSX-T 3.0 is installed. The NSX-T appliance is connected to the same management network as the VCSA and the ESXi nodes; in my case, this is VLAN 41. Note that all the ESXi nodes of the vSAN cluster are connected to one vSphere Distributed Switch, and each node has two uplinks that connect to ToR A and ToR B.
In this blog series, I would like to share my experience deploying vSphere with Tanzu using NSX-T 3.0 networking. The following is a very high-level workflow for setting up the environment from scratch:
In this post, we will take a look at modifying the collection interval of PowerFlex Adapter instances. The PowerFlex Management Pack for vROps supports four instance types.
Note: The product guide recommends configuring no more than 40 Cisco switches in one PowerFlex Networking instance. So, if you have 80 switches in your PowerFlex system, you will need to configure two PowerFlex Networking instances, where each instance connects to, queries, and collects details from 40 switches. This is based on the default collection interval of 5 minutes.
This simply means that one PowerFlex Networking adapter instance can complete collection from a maximum of 40 switches in 5 minutes, or roughly 8 switches per minute. This is a rough calculation, and it depends on factors like REST API response times, switch firmware/OS versions, and so on. So if you change the default interval, always monitor the collection cycle for some time and verify that the collection process completes successfully within the new time interval.
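To make the sizing arithmetic concrete, here is a small sketch of that rule of thumb. It simply assumes the product guide's 40-switches-per-5-minutes guidance scales linearly, which, as noted above, is only a rough approximation.

```python
import math

# Product-guide guidance: one instance covers 40 switches per 5-minute cycle.
SWITCHES_PER_MINUTE = 40 / 5

def max_switches(interval_minutes: int) -> int:
    """Rough per-instance switch capacity for a given collection interval."""
    return int(SWITCHES_PER_MINUTE * interval_minutes)

def instances_needed(total_switches: int, interval_minutes: int = 5) -> int:
    """Approximate number of PowerFlex Networking instances required."""
    return math.ceil(total_switches / max_switches(interval_minutes))

print(max_switches(5))       # 40 switches per instance at the default interval
print(instances_needed(80))  # 2 instances for 80 switches, as in the note above
```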
Hope it was useful. Cheers!
Benchmarking IT infrastructure is standard practice and is usually done before putting it into a production environment. It gives you baseline values for different performance aspects of the system or solution under test. These benchmarking principles apply to Kubernetes clusters too, although the test cases and evaluation criteria may vary slightly compared to benchmarking traditional IT infrastructure.
Following are some of the test considerations: