A scenario describing how the network traffic flows inside the HP blades and the HP Virtual Connect (VC) modules in an HP BladeSystem C7000.
Hardware Details
- HP Enclosure = BladeSystem C7000 Enclosure G2
- HP Blades = BL680c G7
- HP Virtual Connect FlexFabric
- Dual Port FlexFabric 10Gb Converged Network Adapter
- Dual Port FlexFabric 10Gb Network Adapter
Network Traffic Details
- VMware vCenter
- FT (Fault Tolerance) and vMotion
The above traffic classes need to be separated because of the heavy network load and also from a security point of view. This is one scenario for designing vSphere 5 with HP 3PAR storage and the HP c-Class BladeSystem.
The Network Traffic Design
More details -
Each blade has 3 dual-port 10Gb FlexFabric adapters on board, giving a total of 6 x 10Gb ports. These are called LOM (LAN On Motherboard) ports, LOM1 to LOM6. Each LOM is internally divided into 4 FlexNIC adapters, and those 4 adapters share a common bandwidth pool, i.e. a maximum of 10Gb between them. Being able to divide the traffic inside each LOM like this is the beauty of the FlexFabric adapters.
Here LOM1 to LOM4 are 10Gb FlexFabric converged (FCoE) adapters, so each of these LOMs has one FC function that is used for the SAN traffic. LOM5 and LOM6 are normal 10Gb FlexFabric adapters.
There are 2 HP Virtual Connect (VC) modules in the enclosure, installed in Bay 1 and Bay 2. For redundancy, LOM1, LOM3 and LOM5 are internally connected to Bay 1, and LOM2, LOM4 and LOM6 to Bay 2. Each VC has one uplink to the network and one uplink to the FC (SAN) switch, and both VCs run in Active/Active mode. So each traffic type gets at least 2 adapters, one from each bay, which provides redundancy, HA and load balancing.
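To make the layout easier to picture, here is a small illustrative Python sketch of the LOM-to-bay mapping and a possible FlexNIC bandwidth split. The per-FlexNIC figures are hypothetical placeholders, not values from this design; the only fixed constraints are that each LOM exposes at most 4 FlexNICs and that they share a 10Gb ceiling.

```python
# Illustrative model of one blade's FlexFabric layout (hypothetical allocations).
# Fixed facts from the text: 6 LOM ports, odd LOMs wired to VC Bay 1 and even
# LOMs to VC Bay 2, each LOM split into 4 FlexNICs sharing at most 10 Gb.

LOM_TO_BAY = {f"LOM{i}": 1 if i % 2 else 2 for i in range(1, 7)}

# Example bandwidth split (Gb) for one converged LOM -- placeholder numbers.
flexnic_split = {
    "FC (SAN)":   4.0,   # FCoE function, present on LOM1-LOM4 only
    "Management": 1.0,
    "vMotion/FT": 2.0,
    "VM traffic": 3.0,
}

assert len(flexnic_split) <= 4, "a FlexFabric port exposes at most 4 FlexNICs"
assert sum(flexnic_split.values()) <= 10.0, "FlexNICs share a 10 Gb ceiling"

for lom, bay in LOM_TO_BAY.items():
    print(f"{lom} -> VC Bay {bay}")
```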
The VC is simply a Layer 2 network device; it will not do any routing.
NOTE-
Here the vMotion and FT traffic flows over the blade enclosure backplane itself; it does not go up to the VC or to the external core switch.
This is a specific scenario: the blades inside the enclosure are configured together as one ESXi cluster, so there is no need for vMotion or FT traffic to leave the enclosure. The advantage is that the vMotion and FT traffic will not overload the VC or the core switch.
There are several factors involved in optimizing iSCSI performance when using the software iSCSI initiator. The only area where iSCSI performance can easily be optimized on an ESX host is the network configuration.
Separate the network traffic
On an ESX host that carries virtual machine traffic, vMotion traffic and iSCSI traffic:
- vSwitch1 = for virtual machines
- vSwitch2 = for vMotion
- vSwitch3 = for iSCSI
Segregating and isolating the iSCSI traffic, through VLAN configuration on the virtual switch or the physical switch, is required to ensure iSCSI data integrity.
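As a minimal sketch of this kind of segregation, assuming pyVmomi, a hypothetical vCenter/host and made-up uplink names and VLAN ID, the following creates a dedicated iSCSI vSwitch (vSwitch3, as in the list above) with a VLAN-tagged port group. The later sketches in this post reuse the `ns` (HostNetworkSystem) and `host` objects obtained here.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details -- replace with your own environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Pick the first ESX(i) host in the inventory (adjust the lookup as needed).
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = view.view[0]
ns = host.configManager.networkSystem   # HostNetworkSystem, reused in later sketches

# vSwitch3 dedicated to iSCSI, with two uplinks (uplink names are assumptions).
ns.AddVirtualSwitch(
    vswitchName="vSwitch3",
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic4", "vmnic5"])))

# VLAN-tagged port group to keep the iSCSI traffic isolated (VLAN 100 is an example).
ns.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
    name="iSCSI", vlanId=100, vswitchName="vSwitch3",
    policy=vim.host.NetworkPolicy()))

# Disconnect(si) once all of the configuration work is finished.
```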
VMware Multiprotocol Design
Using 8 NICs
vSwitch0 - vMotion and management, plus some virtual machine port groups you can create.
Use 2 pNICs, one pNIC to each pSwitch; teaming policy = Route based on the originating virtual port ID.
Management traffic is light, and the maximum vMotion throughput you will get is somewhere below 150 MB/sec because the vMotion buffer size is 256 KB, so you can accommodate some VM traffic on this vSwitch as well (see the sketch below).
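A minimal sketch of vSwitch0, reusing `ns` and `host` from the earlier connection sketch; the uplink names, VLAN ID and IP address are assumptions. "Route based on the originating virtual port ID" corresponds to the `loadbalance_srcid` teaming policy in the vSphere API.

```python
from pyVmomi import vim

# `ns` and `host` come from the connection sketch earlier in this post.
# vSwitch0 already exists on a default install (with the Management Network
# port group), so its spec is fetched and updated rather than created.
vswitch = next(v for v in ns.networkInfo.vswitch if v.name == "vSwitch0")
spec = vswitch.spec
spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic0", "vmnic1"])
spec.policy.nicTeaming.policy = "loadbalance_srcid"   # originating virtual port ID
spec.policy.nicTeaming.nicOrder.activeNic = ["vmnic0", "vmnic1"]
ns.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)

# A port group and a VMkernel interface for vMotion (VLAN and IP are placeholders).
ns.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
    name="vMotion", vlanId=20, vswitchName="vSwitch0",
    policy=vim.host.NetworkPolicy()))
vmk = ns.AddVirtualNic(
    portgroup="vMotion",
    nic=vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress="192.168.20.11",
                             subnetMask="255.255.255.0")))

# Mark the new VMkernel interface for vMotion traffic.
host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", vmk)
```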
vSwitch1 - iSCSI
Use 2 pNICs, one pNIC to each pSwitch; teaming policy = Route based on the originating virtual port ID.
- Create 2 VMkernel port groups for iSCSI. On each port group, under NIC teaming, select "Override switch failover order" and set one pNIC as active and the other pNIC as unused, so each port group has one dedicated pNIC.
- Use Round Robin as the multipathing policy in ESXi, and check with NetApp which multipathing policy should be used: if the array is active/active use Round Robin, and if it is ALUA use Fixed path with array preference or MRU. Confirm this with NetApp.
- Use jumbo frames, set the MTU to 6000 (a sketch of this configuration follows after this list).
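A sketch of the iSCSI configuration, again reusing `ns` and `host` from the earlier connection sketch; the uplinks, VLAN, IP addresses and the vmhba name are assumptions, and the MTU of 6000 is the value given in the text. Leaving an uplink out of the active/standby lists on a port group is what makes it "unused" there.

```python
from pyVmomi import vim

# `ns` and `host` come from the connection sketch earlier in this post.

# vSwitch1 for iSCSI: two uplinks, jumbo frames (MTU 6000 per the text).
ns.AddVirtualSwitch(
    vswitchName="vSwitch1",
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=128, mtu=6000,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2", "vmnic3"])))

# Two port groups, each pinned to a single active uplink; the other uplink is
# left out of the active/standby lists, which makes it unused for that port group.
bound_vmks = []
for name, active, ip in [("iSCSI-1", "vmnic2", "10.0.100.11"),
                         ("iSCSI-2", "vmnic3", "10.0.100.12")]:
    pg_policy = vim.host.NetworkPolicy(
        nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
            nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                activeNic=[active], standbyNic=[])))
    ns.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name=name, vlanId=100, vswitchName="vSwitch1", policy=pg_policy))
    vmk = ns.AddVirtualNic(
        portgroup=name,
        nic=vim.host.VirtualNic.Specification(
            mtu=6000,
            ip=vim.host.IpConfig(dhcp=False, ipAddress=ip,
                                 subnetMask="255.255.255.0")))
    bound_vmks.append(vmk)

# Bind both VMkernel NICs to the software iSCSI adapter ("vmhba33" is an
# assumption; the software iSCSI adapter must already be enabled on the host).
for vmk in bound_vmks:
    host.configManager.iscsiManager.BindVnic(iScsiHbaName="vmhba33",
                                             vnicDevice=vmk)

# The Round Robin path selection policy is set per device, for example with:
#   esxcli storage nmp device set --device <naa.id> --psp VMW_PSP_RR
```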
vSwitch2 - NFS
- Create 2 VMkernel port groups for NFS. On each port group, under NIC teaming, select "Override switch failover order" and set one pNIC as active and the other pNIC as standby, so each port group has one dedicated pNIC.
- Create 2 NFS shares; there will be one NFS IP per storage controller (the storage NICs are teamed). Divide the VMs across these NFS datastores so that the whole load is distributed.
- For NFS, the VMkernel does a one-to-one mapping between the ESX pNIC and the storage pNIC, so at any instant it can use only one pNIC, and the maximum network speed for one path is 1Gb.
- Use jumbo frames, set the MTU to 6000 (a sketch of this configuration follows after this list).
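A sketch of the NFS configuration, reusing `ns` and `host` from the earlier connection sketch; the uplinks, VLAN, VMkernel IPs, controller IPs and export paths are assumptions.

```python
from pyVmomi import vim

# `ns` and `host` come from the connection sketch earlier in this post.

# vSwitch2 for NFS: two uplinks, jumbo frames (MTU 6000 per the text).
ns.AddVirtualSwitch(
    vswitchName="vSwitch2",
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=128, mtu=6000,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic4", "vmnic5"])))

# Two VMkernel port groups, each with one active and one standby uplink.
for name, active, standby, ip in [
        ("NFS-1", "vmnic4", "vmnic5", "10.0.200.11"),
        ("NFS-2", "vmnic5", "vmnic4", "10.0.200.12")]:
    pg_policy = vim.host.NetworkPolicy(
        nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
            nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                activeNic=[active], standbyNic=[standby])))
    ns.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name=name, vlanId=200, vswitchName="vSwitch2", policy=pg_policy))
    ns.AddVirtualNic(
        portgroup=name,
        nic=vim.host.VirtualNic.Specification(
            mtu=6000,
            ip=vim.host.IpConfig(dhcp=False, ipAddress=ip,
                                 subnetMask="255.255.255.0")))

# One NFS datastore per controller IP, so the VM load can be split between them.
ds_system = host.configManager.datastoreSystem
for remote_ip, export, ds_name in [
        ("10.0.200.50", "/vol/nfs_ds1", "nfs_ds1"),
        ("10.0.200.51", "/vol/nfs_ds2", "nfs_ds2")]:
    ds_system.CreateNasDatastore(vim.host.NasVolume.Specification(
        remoteHost=remote_ip, remotePath=export,
        localPath=ds_name, accessMode="readWrite"))
```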
vSwitch3 - Virtual machine traffic
Use 2 pNICs, one pNIC to each pSwitch; teaming policy = Route based on the originating virtual port ID, and create the VM port groups (see the sketch below).
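A short sketch of the VM traffic vSwitch, reusing `ns` from the earlier connection sketch; the uplink names, port group names and VLAN IDs are assumptions. Note that this follows the 8-NIC layout of this section, where vSwitch3 carries VM traffic, unlike the earlier 3-vSwitch example.

```python
from pyVmomi import vim

# `ns` comes from the connection sketch earlier in this post.

# vSwitch3 for VM traffic: two uplinks, Route based on originating virtual port ID.
ns.AddVirtualSwitch(
    vswitchName="vSwitch3",
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=256,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic6", "vmnic7"]),
        policy=vim.host.NetworkPolicy(
            nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
                policy="loadbalance_srcid",
                nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                    activeNic=["vmnic6", "vmnic7"])))))

# Example VM port groups on their own VLANs.
for name, vlan in [("VM-Prod", 30), ("VM-Test", 31)]:
    ns.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name=name, vlanId=vlan, vswitchName="vSwitch3",
        policy=vim.host.NetworkPolicy()))
```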
The advantages here:
- dedicated NICs for NFS and iSCSI, so the best performance for all of the VM storage
- dedicated NICs for VM traffic