Hi there,
I've been troubleshooting a bunch of servers that each have a single 10Gb connection: when moving VMs around with cold migration, transfer speeds sit between 400-500Mbit. For testing, all of these hosts have SSDs in RAID-0 that can easily do 12Gb/sec (1,000+ MB/s) reads and writes, so storage isn't the bottleneck. With vMotion I can move the same VMs at around 6-7Gbit, but cold migration never goes above 400-500Mbit, even though it goes to the same storage, over the same network and the same physical wire/switch/NIC. I've tested across 5 different hosts, ranging from a Dell R515 to R710s and an R720XD, all with decent RAID controllers. For some reason it looks like ESXi artificially limits the network speed for cold migration (for testing each server has one NIC, used for both management and vMotion); the graphs never spike up or down, the line is completely flat.
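(By "graphs" I mean the vSphere performance charts; the same flat rate can be watched live from the host console with something like the commands below. Syntax is from memory, so it may need tweaking on 5.5:)

esxcli network nic list   # confirms the vmnic is linked at 10000 Mbps, full duplex
esxtop                    # press 'n' for the network view; the MbTX/s / MbRX/s columns show the live per-vmnic rate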
I've already checked the virtual switches and nothing has traffic-shaping limits applied; I don't know where else to look. I also tested SMB traffic between VMs hosted on different hosts, over the same physical 10Gb network, and file transfers hit 6-7Gbit, so the link runs at full speed for everything except cold migration. I've had a similar problem in the past where, out of nowhere, a 1Gb connection wouldn't go faster than 300Mbit no matter what I did, and then just as suddenly it would reach full 1Gb speeds again. No apparent reason why!
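(For completeness, the shaping check amounts to something like the following, with the default vSwitch0 / "Management Network" names as placeholders for whatever the hosts actually use; neither shows any limit configured here:)

esxcli network vswitch standard policy shaping get -v vSwitch0
esxcli network vswitch standard portgroup policy shaping get -p "Management Network"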
I'm using an Intel X540-T1 PCIe card in each server.
The servers have 128GB of RAM each, are fully updated firmware-wise, and are running ESXi 5.5 build 2068190.
I've also tested Veeam Quick Migration in forced mode, which I suppose is similar to the old FastSCP client. Pretty much the same result: around 500Mbit.
Any suggestions on how to speed up cold migration?
Thanks!