Channel: VMware Communities : All Content - ESXi

Intel Iris 655 passthrough - ESXi 6.5


Hi All,

 

I can't seem to get passthrough of an Intel Iris 655 to work on ESXi 6.5; this is on an Intel NUC.

 

Interestingly, the PCI device shows up as "Intel(R) VGA compatible controller" rather than Intel Iris 655. A Windows 10 installation does detect the Intel Iris 655, but shows an exclamation mark saying the device isn't working.

 

I've also tried an Ubuntu install, which just doesn't seem to start up.

 

I've set the SVGA setting to FALSE and the console is no longer displayed.

 

Any pointers what to try next?
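For context, passthrough like this is typically driven by a few lines in the VM's .vmx file. The snippet below is only a sketch of commonly suggested settings, not a verified fix; the PCI address in pciPassthru0.id is hypothetical and must match whatever address your host actually reports for the iGPU:

```ini
# Disable the virtual SVGA device (the setting already mentioned above)
svga.present = "FALSE"

# Hypothetical passthrough entry; the id must match the host's PCI address for the iGPU
pciPassthru0.present = "TRUE"
pciPassthru0.id = "00:02.0"

# Sometimes suggested for iGPU passthrough when the guest driver reports an error
hypervisor.cpuid.v0 = "FALSE"
```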


Kickstart File Not Working


Hi. Can anybody explain what is wrong with this kickstart file? It's for ESXi 6.7 U2 HPE Customized (nested ESXi).

Nothing in the Stage 02 post-installation section works, except this one command:

### Set Search Domain
esxcli network ip dns search add --domain=o365.az

 

 

### ESXi Installation Script
### Hostname: LAB-ESXi01A
### Author: M. Buijs
### Date: 2017-08-11
### Tested with: ESXi 6.0 and ESXi 6.5

##### Stage 01 - Pre installation:

    ### Accept the VMware End User License Agreement
    vmaccepteula

    ### Set the root password for the DCUI and Tech Support Mode
    rootpw VMware1!

    ### The install media (priority: local / remote / USB)
    install --firstdisk=local --overwritevmfs --novmfsondisk

    ### Configure a static IP on the first network adapter
    network --bootproto=static --device=vmnic0 --ip=172.18.18.235 --netmask=255.255.255.0 --gateway=172.18.18.254 --nameserver=192.168.126.21,192.168.151.254 --hostname=ESXv6.o365.az --addvmportgroup=0

    ### Reboot ESXi Host
    reboot --noeject

##### Stage 02 - Post installation:

    ### Open busybox and launch commands
    %firstboot --interpreter=busybox

    ### Set Search Domain
    esxcli network ip dns search add --domain=o365.az

    ### Add second NIC to vSwitch0
    esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0

    ### Disable IPv6 support (reboot is required)
    esxcli network ip set --ipv6-enabled=false

    ### Disable CEIP
    esxcli system settings advanced set -o /UserVars/HostClientCEIPOptIn -i 2

    ### Enable maintenance mode
    esxcli system maintenanceMode set -e true

    ### Reboot
    esxcli system shutdown reboot -d 15 -r "rebooting after ESXi host configuration"

Automated install with kickstart file and VMware 6.7 doesn't work, but sometimes it does


I have a relatively simple kickstart file that always works with 6.5 and only "sometimes" with 6.7.

 

When it doesn't work, nothing in the %firstboot section has been executed.

 

The kickstart.log file is empty in either case (success or failure).

 

The weird thing is that I have two types of install: one with 10 Gbit NICs and one with 1 Gbit NICs. The 10 Gbit version works nearly always and the 1 Gbit version nearly never.

But in both cases it sometimes works and sometimes doesn't.

 

I already increased the sleep value but that doesn't change anything.

 

It's the typical kickstart file:

 

#THE BELOW PART ALWAYS WORKS
#=============================
vmaccepteula
clearpart --firstdisk --overwritevmfs
install --firstdisk --overwritevmfs
rootpw blablabla
network --bootproto=static ... (with all options)
reboot

#THE BELOW PART "SOMETIMES" WORKS
#==================================
%firstboot --interpreter=busybox
sleep 30
#do esxcli stuff...

 

 

If only I could find some logs to search through.

Esxcli "Connection error"


I have been running a script in "local.sh" on some of our hosts that does several things, amongst which is a task to bring a host out of maintenance mode.

 

We have been using this script for some years, starting at a time when we were using vSphere 6.5.

 

We are now on 6.7u2 and on one of our systems the script has started playing up.

 

Especially this bit:

 

MaintenanceModeStatus=$(esxcli system maintenanceMode get)
case $MaintenanceModeStatus in
        Enabled)
                logger -s "AUTO-START : Exiting Maintenance Mode"
                vim-cmd hostsvc/maintenance_mode_exit
                if [ $? -ne 0 ]
                then
                        logger -s "AUTO-START : Maintenance Mode exit failed"
                fi
                ;;
        Disabled)
                logger -s "AUTO-START : Already out of Maintenance Mode"
                ;;
        *)
                logger -s "AUTO-START : Invalid MaintenanceMode status - $MaintenanceModeStatus"
                ;;
esac

 

On one of the systems, we have started seeing "Invalid MaintenanceMode status" log entries for both hosts that use the script: $MaintenanceModeStatus is coming back as "Connection error".

 

Putting a sleep delay before this section of the script seems to help, but we would like to understand why.

 

When "local.sh" is run, is possibly the case that not all services in VCSA are fully ready, and calls to get information may come back invalid, empty or unexpected values?

ESXi configuring network


Hello,

I am trying to get my new ESXi 6.7.0 host running and I can't find any VIBs online for my TP-Link TL-WN822N USB adapter. Can anyone point me to a website where I can find the right drivers?

 

Thank you very much.

ESXi 6.5 - hot plugging a USB device works on one VM but fails on another


Hi all,

 

I have some USB devices that are connected to my ESXi 6.5.0 host. At times, I need to hot plug them into VMs. When I edit one VM, add a USB device, assign the physical device and save, all works well and the device is plugged into the VM. When I do the same with the second VM, saving fails, saying "The attempted operation cannot be performed in the current state (Powered on)."

Both VMs run flavors of Linux x64 and both have tools installed and running (if that matters).

I checked the configs and "devices.hotplug" is not configured for either. I even tried setting "devices.hotplug" to TRUE but that didn't make a difference.

 

What could be the reason for one machine accepting USB hotplug and the other rejecting it?

 

Thanks in advance for any insight!

Trying to update from HPE ESXi 6.5 U2 to U3 - "cannot be live installed" error


I’m trying to update to U3 of ESXi 6.5, and am getting the following error:

 

[root@Gen8:~] esxcli software vib install -d "/vmfs/volumes/SSD/images/VMware-ESXi-6.5.0-Update3-14990892-HPE-preGen9-650.U3.9.6.10.1-Dec2019-depot.zip"                                                       

  [StatelessError]                                                                                     

The transaction is not supported: VIB VMW_bootbank_vmkusb_0.1-1vmw.650.3.96.13932383 cannot be live installed. VIB HPE_bootbank_oem-build_650.U3.9.6.10-4240417 cannot be live installed. VIB VMW_bootbank_bnxtnet_20.6.101.7-23vmw.650.3.96.13932383 cannot be live installed. VIB VMW_bootbank_ixgben_1.7.1.15-1vmw.650.3.96.13932383 cannot be live installed. VIB VMW_bootbank_smartpqi_1.0.1.553-28vmw.650.3.96.13932383 cannot be live installed. VIB VMW_bootbank_igbn_0.1.1.0-4vmw.650.3.96.13932383 cannot be live installed. VIB VMware_bootbank_esx-base_6.5.0-3.108.14990892 cannot be live installed. VIB VMW_bootbank_lsi-msgpt35_09.00.00.00-5vmw.650.3.96.13932383 cannot be live installed. VIB VMware_bootbank_cpu-microcode_6.5.0-3.108.14990892 cannot be live installed. VIB VMW_bootbank_nvme_1.2.2.28-1vmw.650.3.96.13932383 cannot be live installed. VIB VMW_bootbank_ne1000_0.8.3-8vmw.650.2.75.10884925 cannot be live installed. VIB VMW_bootbank_misc-drivers_6.5.0-3.96.13932383 cannot be live installed. VIB VMW_bootbank_ntg3_4.1.3.2-1vmw.650.2.75.10884925 cannot be live installed. VIB VMW_bootbank_nenic_1.0.29.0-1vmw.650.3.96.13932383 cannot be live installed. VIB VMware_bootbank_lsu-lsi-drivers-plugin_1.0.0-1vmw.650.2.79.11925212 cannot be live installed. VIB VMware_bootbank_esx-tboot_6.5.0-3.108.14990892 cannot be live installed. VIB VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-8vmw.650.2.79.11925212 cannot be live installed. VIB VMware_bootbank_vsan_6.5.0-3.108.14833668 cannot be live installed. VIB VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-16vmw.650.3.96.13932383 cannot be live installed. VIB VMW_bootbank_i40en_1.8.1.9-2vmw.650.3.96.13932383 cannot be live installed. VIB VMW_bootbank_lsi-mr3_7.708.07.00-3vmw.650.3.96.13932383 cannot be live installed. VIB VMW_bootbank_lsi-msgpt3_17.00.02.00-1vmw.650.3.96.13932383 cannot be live installed. VIB VMW_bootbank_vmw-ahci_1.1.6-1vmw.650.3.96.13932383 cannot be live installed. 
VIB VMW_bootbank_nenic_1.0.0.2-1vmw.650.0.0.4564106 cannot be removed live. VIB VMware_bootbank_esx-base_6.5.0-2.57.9298722 cannot be removed live. VIB VMW_bootbank_ntg3_4.1.3.0-1vmw.650.1.36.7388607 cannot be removed live. VIB VMW_bootbank_lsi-msgpt3_16.00.01.00-1vmw.650.2.50.8294253 cannot be removed live. VIB VMware_bootbank_esx-tboot_6.5.0-2.57.9298722 cannot be removed live. VIB VMW_bootbank_bnxtnet_20.6.101.7-11vmw.650.2.50.8294253 cannot be removed live. VIB VMW_bootbank_i40en_1.3.1-19vmw.650.2.50.8294253 cannot be removed live. VIB VMW_bootbank_lsi-mr3_7.702.13.00-3vmw.650.2.50.8294253 cannot be removed live. VIB VMW_bootbank_vmw-ahci_1.1.1-1vmw.650.2.50.8294253 cannot be removed live. VIB VMware_bootbank_cpu-microcode_6.5.0-2.57.9298722 cannot be removed live. VIB VMW_bootbank_igbn_0.1.0.0-15vmw.650.1.36.7388607 cannot be removed live. VIB VMware_bootbank_vsan_6.5.0-2.57.9152287 cannot be removed live. VIB VMW_bootbank_ne1000_0.8.3-7vmw.650.2.50.8294253 cannot be removed live. VIB HPE_bootbank_hpe-build_650.U2.9.6.8-4240417 cannot be removed live. VIB VMW_bootbank_nvme_1.2.1.34-1vmw.650.2.50.8294253 cannot be removed live. VIB VMW_bootbank_misc-drivers_6.5.0-2.50.8294253 cannot be removed live. VIB VMW_bootbank_smartpqi_1.0.1.553-10vmw.650.2.50.8294253 cannot be removed live. VIB VMW_bootbank_lsi-msgpt35_03.00.01.00-9vmw.650.2.50.8294253 cannot be removed live. VIB VMW_bootbank_ixgben_1.4.1-12vmw.650.2.50.8294253 cannot be removed live.       

 

The server is in maintenance mode.

 

ESXi is installed on an 8 GB USB stick.

 

From my searching on this problem, the issue may be that my /bootbank or /altbootbank is corrupt.

 

Is there a painless way to fix this?   
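If corruption is the suspicion, one low-risk first step is to check where the bootbanks actually point. On a healthy host, /bootbank and /altbootbank are symlinks to FAT volumes on the boot device; a bootbank that failed to mount at boot commonly resolves to a ramdisk path under /tmp instead, which is a classic symptom of a failing USB stick. A quick sketch from the ESXi shell:

```shell
# Show where the bootbank symlinks resolve
# (a bootbank that failed to mount often points into /tmp)
ls -l /bootbank /altbootbank

# A healthy bootbank contains boot.cfg
ls -l /bootbank/boot.cfg /altbootbank/boot.cfg
```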

VM template has a small thick-provisioned lazy-zeroed hard disk


Issue: My VM template has a small thick-provisioned lazy-zeroed hard disk of 50 GB. Can I turn it into a thin-provisioned disk and enlarge it when I create a VM from the template? If not, is there a way to do it after the VM is created?
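If converting at deployment time isn't an option, one possible after-the-fact route (with the VM powered off) is to clone the disk with vmkfstools into a thin-provisioned copy and then grow it. The paths below are made up for illustration, and the partition/filesystem inside the guest still has to be extended separately:

```shell
# Clone the 50 GB disk to a thin-provisioned copy (paths are hypothetical)
vmkfstools -i /vmfs/volumes/datastore1/vm1/vm1.vmdk -d thin /vmfs/volumes/datastore1/vm1/vm1-thin.vmdk

# Grow the new disk to e.g. 100 GB, then point the VM at the new .vmdk
vmkfstools -X 100G /vmfs/volumes/datastore1/vm1/vm1-thin.vmdk
```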


ESXi is Broken


I have installed VMware ESXi and had an Ubuntu VM. I got frustrated with not being able to increase the allotted drive space, so I deleted the VM and the datastore in the hope of being able to add a new datastore and VM. However, when I try to add a new datastore I get an error that says something like "Cannot change the host". I can't add a VM because it needs a datastore, but no datastores show as available. I have no idea what to do.

ESXi 6.5 management services hang after a couple of days (max. 35-40 days)


Hi,

 

First of all, let us explain our cluster. We have two Dell R520 servers running the updated Dell ESXi 6.5 image of the hypervisor. In each server, the hypervisor OS is installed on a USB key (instead of the Dual SD Card Module, which we don't have installed).

 

After a couple of days, the same hypervisor partially hangs. We detect the problem when we go to restart, shut down or start a VM, or perform some operation (such as creating a VM snapshot): the task does not complete. The vSphere HTML5 interface shows the task (for example, "register VM") stuck at 0% or 10%, or progressing very slowly before hanging completely at 99%. We can't cancel the task; either a random error message like "operation cannot be completed" appears, or the cancel icon simply doesn't respond to clicks.

 

We log in to the hypervisor's web client directly, and when we open the VM inventory (only in that tab) we get a very slow response (the client normally disconnects with "connection lost"). On that hypervisor we run Veeam B&R 9.5 U4b to back up the VMs. For example, today, after 35 days, we received alerts saying the backup process could not be completed. The VM that failed was powered on (the guest OS worked perfectly) but appeared as invalid in the VM list. We shut it down and removed it from the inventory, but when we tried to register it again it was impossible: after more than 60 minutes the task still hangs at 99%.

 

We connected over SSH to investigate the problem, and this is what we found in the logs.

 

vmkwarning.log

 

2020-03-14T11:48:17.907Z cpu31:65616)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T11:48:40.201Z cpu25:65610)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T11:49:02.504Z cpu22:65607)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T11:49:24.804Z cpu22:65607)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T11:49:47.104Z cpu17:65602)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T11:50:09.401Z cpu17:65602)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T11:50:31.697Z cpu31:65616)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T11:50:53.864Z cpu23:65608)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T11:51:16.166Z cpu17:65602)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T11:51:38.464Z cpu19:65604)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T11:52:00.758Z cpu19:65604)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T11:52:23.061Z cpu27:65612)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T11:52:45.361Z cpu24:65609)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T11:53:07.660Z cpu24:65609)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

 

If we try to get information about the device...

 

esxcli storage core device list -d mpx.vmhba32:C0:T0:L0

 

... nothing. The command hangs and there is no response in the SSH session.

 

On the other hypervisor (the one that works like a charm), the response is:

 

mpx.vmhba32:C0:T0:L0

   Display Name: Local USB Direct-Access (mpx.vmhba32:C0:T0:L0)

   Has Settable Display Name: false

   Size: 7400

   Device Type: Direct-Access

   Multipath Plugin: NMP

   Devfs Path: /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0

   Vendor: TOSHIBA

   Model: TransMemory    

   Revision: 0100

   SCSI Level: 2

   Is Pseudo: false

   Status: on

   Is RDM Capable: false

   Is Local: true

   Is Removable: true

   Is SSD: false

   Is VVOL PE: false

   Is Offline: false

   Is Perennially Reserved: false

   Queue Full Sample Size: 0

   Queue Full Threshold: 0

   Thin Provisioning Status: unknown

   Attached Filters:

   VAAI Status: unsupported

   Other UIDs: vml.0000000000766d68626133323a303a30

   Is Shared Clusterwide: false

   Is Local SAS Device: false

   Is SAS: false

   Is USB: true

   Is Boot USB Device: true

   Is Boot Device: true

   Device Max Queue Depth: 1

   No of outstanding IOs with competing worlds: 1

   Drive Type: unknown

   RAID Level: unknown

   Number of Physical Drives: unknown

   Protection Enabled: false

   PI Activated: false

   PI Type: 0

   PI Protection Mask: NO PROTECTION

   Supported Guard Types: NO GUARD SUPPORT

   DIX Enabled: false

   DIX Guard Type: NO GUARD SUPPORT

   Emulated DIX/DIF Enabled: false

 

That ID is from the USB stick that holds the hypervisor OS. We assume the ID is the same on both hypervisors.

 

If we look at the faulty hypervisor again, in vmkernel.log:

 

At the beginning of the file:

 

2020-03-14T05:41:41.184Z cpu4:611239)VMW_SATP_LOCAL: satp_local_updatePath:836: Failed to update path "vmhba32:C0:T0:L0" state. Status=Transient storage condition, suggest retry

2020-03-14T05:41:46.356Z cpu2:65587)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T05:41:46.356Z cpu2:65587)ScsiDeviceIO: 3015: Cmd(0x439510b358c0) 0x28, CmdSN 0x1 from world 67214 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x

2020-03-14T05:42:03.185Z cpu10:65885)ScsiPath: 5149: Command 0x0 (cmdSN 0x0, World 0) to path vmhba32:C0:T0:L0 timed out: expiry time occurs 1002ms in the past

2020-03-14T05:42:03.185Z cpu10:65885)VMW_SATP_LOCAL: satp_local_updatePath:836: Failed to update path "vmhba32:C0:T0:L0" state. Status=Transient storage condition, suggest retry

2020-03-14T05:42:08.627Z cpu2:65587)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T05:42:08.627Z cpu2:633249)VFAT: 4593: Failed to get object 36 type 2 uuid 5e3d7952-b8e7916b-ebd7-90b11c4405a0 cnum 0 dindex fffffffecdate 0 ctime 0 MS 0 :Storage initiator error

2020-03-14T05:42:15.186Z cpu7:609657)ScsiPath: 5149: Command 0x0 (cmdSN 0x0, World 0) to path vmhba32:C0:T0:L0 timed out: expiry time occurs 1002ms in the past

2020-03-14T05:42:19.764Z cpu2:65587)ScsiDeviceIO: 3015: Cmd(0x4395013b4140) 0x9e, CmdSN 0x169c5d from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0

2020-03-14T05:42:20.184Z cpu9:66121)NMP: nmp_ResetDeviceLogThrottling:3458: last error status from device mpx.vmhba32:C0:T0:L0 repeated 5 times

2020-03-14T05:42:26.188Z cpu7:609657)ScsiPath: 5149: Command 0x0 (cmdSN 0x0, World 0) to path vmhba32:C0:T0:L0 timed out: expiry time occurs 1002ms in the past

2020-03-14T05:42:26.188Z cpu7:609657)VMW_SATP_LOCAL: satp_local_updatePath:836: Failed to update path "vmhba32:C0:T0:L0" state. Status=Transient storage condition, suggest retry

2020-03-14T05:42:30.901Z cpu2:65587)NMP: nmp_ThrottleLogForDevice:3630: Cmd 0x9e (0x4395013b4140, 0) to dev "mpx.vmhba32:C0:T0:L0" on path "vmhba32:C0:T0:L0" Failed: H:0x7 D:0x0 P:0x0 Invalid se

2020-03-14T05:42:30.901Z cpu2:65587)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T05:42:48.185Z cpu14:622426)ScsiPath: 5149: Command 0x0 (cmdSN 0x0, World 0) to path vmhba32:C0:T0:L0 timed out: expiry time occurs 1001ms in the past

2020-03-14T05:42:48.185Z cpu14:622426)VMW_SATP_LOCAL: satp_local_updatePath:836: Failed to update path "vmhba32:C0:T0:L0" state. Status=Transient storage condition, suggest retry

2020-03-14T05:42:53.173Z cpu2:65587)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T05:42:53.173Z cpu2:65587)ScsiDeviceIO: 3015: Cmd(0x4395013b4140) 0x25, CmdSN 0x169cbe from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0

2020-03-14T05:43:10.184Z cpu2:611807)ScsiPath: 5149: Command 0x0 (cmdSN 0x0, World 0) to path vmhba32:C0:T0:L0 timed out: expiry time occurs 1000ms in the past

2020-03-14T05:43:10.184Z cpu2:611807)VMW_SATP_LOCAL: satp_local_updatePath:836: Failed to update path "vmhba32:C0:T0:L0" state. Status=Transient storage condition, suggest retry

2020-03-14T05:43:15.458Z cpu3:65588)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T05:43:20.184Z cpu12:66121)NMP: nmp_ResetDeviceLogThrottling:3458: last error status from device mpx.vmhba32:C0:T0:L0 repeated 4 times

2020-03-14T05:43:26.608Z cpu3:65588)NMP: nmp_ThrottleLogForDevice:3630: Cmd 0x1a (0x4395013b4140, 0) to dev "mpx.vmhba32:C0:T0:L0" on path "vmhba32:C0:T0:L0" Failed: H:0x7 D:0x0 P:0x0 Invalid se

2020-03-14T05:43:26.608Z cpu3:65588)ScsiDeviceIO: 3015: Cmd(0x4395013b4140) 0x1a, CmdSN 0x169cbf from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Invalid sense data: 0x22 0x20

2020-03-14T05:43:32.184Z cpu1:609329)ScsiPath: 5149: Command 0x0 (cmdSN 0x0, World 0) to path vmhba32:C0:T0:L0 timed out: expiry time occurs 1000ms in the past

2020-03-14T05:43:32.184Z cpu1:609329)VMW_SATP_LOCAL: satp_local_updatePath:836: Failed to update path "vmhba32:C0:T0:L0" state. Status=Transient storage condition, suggest retry

2020-03-14T05:43:37.760Z cpu3:65588)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T05:43:44.186Z cpu9:622457)ScsiPath: 5149: Command 0x0 (cmdSN 0x0, World 0) to path vmhba32:C0:T0:L0 timed out: expiry time occurs 1000ms in the past

2020-03-14T05:43:55.186Z cpu9:622457)ScsiPath: 5149: Command 0x0 (cmdSN 0x0, World 0) to path vmhba32:C0:T0:L0 timed out: expiry time occurs 1000ms in the past

2020-03-14T05:43:55.186Z cpu9:622457)VMW_SATP_LOCAL: satp_local_updatePath:836: Failed to update path "vmhba32:C0:T0:L0" state. Status=Transient storage condition, suggest retry

2020-03-14T05:44:00.056Z cpu3:65588)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T05:44:00.056Z cpu3:65588)ScsiDeviceIO: 3015: Cmd(0x4395013b4140) 0x28, CmdSN 0x1 from world 67214 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x

2020-03-14T05:44:17.184Z cpu9:611153)ScsiPath: 5149: Command 0x0 (cmdSN 0x0, World 0) to path vmhba32:C0:T0:L0 timed out: expiry time occurs 1000ms in the past

2020-03-14T05:44:17.184Z cpu9:611153)VMW_SATP_LOCAL: satp_local_updatePath:836: Failed to update path "vmhba32:C0:T0:L0" state. Status=Transient storage condition, suggest retry

2020-03-14T05:44:20.184Z cpu3:66121)NMP: nmp_ResetDeviceLogThrottling:3458: last error status from device mpx.vmhba32:C0:T0:L0 repeated 4 times

2020-03-14T05:44:22.355Z cpu3:65588)NMP: nmp_ThrottleLogForDevice:3630: Cmd 0x28 (0x4395013b4140, 67214) to dev "mpx.vmhba32:C0:T0:L0" on path "vmhba32:C0:T0:L0" Failed: H:0x7 D:0x0 P:0x0 Invali

2020-03-14T05:44:22.355Z cpu3:65588)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2020-03-14T05:44:22.355Z cpu3:82429)VFAT: 4593: Failed to get object 36 type 2 uuid 5e3d7952-b8e7916b-ebd7-90b11c4405a0 cnum 0 dindex fffffffecdate 0 ctime 0 MS 0 :Storage initiator error

 

And right now:

 

2020-03-14T12:06:52.889Z cpu22:65607)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...                   

2020-03-14T12:06:52.889Z cpu22:65607)ScsiDeviceIO: 3015: Cmd(0x439510aa4bc0) 0x9e, CmdSN 0x16c0dc from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Invalid sense data: 0x0 0x0

2020-03-14T12:07:04.036Z cpu22:65607)ScsiDeviceIO: 3015: Cmd(0x439d0591c040) 0x1a, CmdSN 0x16c0dd from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Invalid sense data: 0x0 0x0

2020-03-14T12:07:05.788Z cpu14:65599)ScsiDeviceIO: 3015: Cmd(0x439510aa4bc0) 0x9e, CmdSN 0x16c0dc from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0

2020-03-14T12:07:11.293Z cpu24:622457)VMW_SATP_LOCAL: satp_local_updatePath:836: Failed to update path "vmhba32:C0:T0:L0" state. Status=Transient storage condition, suggest retry               

2020-03-14T12:07:15.080Z cpu14:65599)ScsiDeviceIO: 3015: Cmd(0x439d0591c040) 0x1a, CmdSN 0x16c0dd from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0

2020-03-14T12:07:15.184Z cpu22:65607)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...                   

2020-03-14T12:07:15.184Z cpu22:65607)ScsiDeviceIO: 3015: Cmd(0x43950dd8c380) 0x1a, CmdSN 0x16c0de from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Invalid sense data: 0x0 0x0

2020-03-14T12:07:21.787Z cpu14:65599)ScsiDeviceIO: 3015: Cmd(0x43950dd8c380) 0x1a, CmdSN 0x16c0de from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0

2020-03-14T12:07:26.334Z cpu22:65607)ScsiDeviceIO: 3015: Cmd(0x439510aa4bc0) 0x25, CmdSN 0x16c0df from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Invalid sense data: 0x0 0x0

2020-03-14T12:07:32.293Z cpu16:611807)VMW_SATP_LOCAL: satp_local_updatePath:836: Failed to update path "vmhba32:C0:T0:L0" state. Status=Transient storage condition, suggest retry               

2020-03-14T12:07:37.483Z cpu16:65601)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...                   

2020-03-14T12:07:37.483Z cpu16:65601)ScsiDeviceIO: 3015: Cmd(0x43950dd8c380) 0x28, CmdSN 0x1 from world 661564 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Invalid sense data: 0x0 0x0

2020-03-14T12:07:50.293Z cpu29:66121)NMP: nmp_ResetDeviceLogThrottling:3458: last error status from device mpx.vmhba32:C0:T0:L0 repeated 5 times                                                 

2020-03-14T12:07:53.293Z cpu20:65885)VMW_SATP_LOCAL: satp_local_updatePath:836: Failed to update path "vmhba32:C0:T0:L0" state. Status=Transient storage condition, suggest retry                

2020-03-14T12:07:55.081Z cpu4:65589)ScsiDeviceIO: 3015: Cmd(0x439d0591c040) 0x28, CmdSN 0x1 from world 67214 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x

2020-03-14T12:07:55.081Z cpu29:67976)VFAT: 4593: Failed to get object 36 type 2 uuid 5e3d7952-b8e7916b-ebd7-90b11c4405a0 cnum 0 dindex fffffffecdate 0 ctime 0 MS 0 :Timeout                     

2020-03-14T12:07:57.936Z cpu16:65601)NMP: nmp_ThrottleLogForDevice:3630: Cmd 0x28 (0x43950dd8c380, 661564) to dev "mpx.vmhba32:C0:T0:L0" on path "vmhba32:C0:T0:L0" Failed: H:0x7 D:0x0 P:0x0 Inva

2020-03-14T12:07:57.936Z cpu16:65601)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...                   

2020-03-14T12:08:01.789Z cpu4:65589)ScsiDeviceIO: 3015: Cmd(0x43950dd8c380) 0x28, CmdSN 0x1 from world 661564 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0

2020-03-14T12:08:01.789Z cpu7:661564)VFAT: 4593: Failed to get object 36 type 2 uuid ded6d1e0-dadb0f65-35ce-21b30441be7d cnum 0 dindex fffffffecdate 0 ctime 0 MS 0 :Timeout                     

2020-03-14T12:08:09.082Z cpu26:65611)ScsiDeviceIO: 3015: Cmd(0x439510aa4bc0) 0x1a, CmdSN 0x16c0ea from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Invalid sense data: 0x0 0x0

2020-03-14T12:08:15.294Z cpu21:611239)ScsiPath: 5149: Command 0x0 (cmdSN 0x0, World 0) to path vmhba32:C0:T0:L0 timed out: expiry time occurs 1001ms in the past                                 

2020-03-14T12:08:15.294Z cpu21:611239)VMW_SATP_LOCAL: satp_local_updatePath:836: Failed to update path "vmhba32:C0:T0:L0" state. Status=Transient storage condition, suggest retry               

2020-03-14T12:08:20.234Z cpu26:65611)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...                   

2020-03-14T12:08:20.234Z cpu26:65611)ScsiDeviceIO: 3015: Cmd(0x439d00afa480) 0x9e, CmdSN 0x16c0eb from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Invalid sense data: 0x0 0x0

2020-03-14T12:08:37.341Z cpu5:611153)ScsiPath: 5149: Command 0x0 (cmdSN 0x0, World 0) to path vmhba32:C0:T0:L0 timed out: expiry time occurs 1047ms in the past                                  

2020-03-14T12:08:37.341Z cpu5:611153)VMW_SATP_LOCAL: satp_local_updatePath:836: Failed to update path "vmhba32:C0:T0:L0" state. Status=Transient storage condition, suggest retry                

2020-03-14T12:08:42.874Z cpu26:65611)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...                   

2020-03-14T12:08:42.874Z cpu26:65611)ScsiDeviceIO: 3015: Cmd(0x439510aa4bc0) 0x28, CmdSN 0x1 from world 661533 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Invalid sense data: 0x0 0x0

2020-03-14T12:08:49.299Z cpu1:609657)ScsiPath: 5149: Command 0x0 (cmdSN 0x0, World 0) to path vmhba32:C0:T0:L0 timed out: expiry time occurs 1002ms in the past                                  

2020-03-14T12:08:50.294Z cpu24:66121)NMP: nmp_ResetDeviceLogThrottling:3458: last error status from device mpx.vmhba32:C0:T0:L0 repeated 5 times                                                 

2020-03-14T12:08:54.028Z cpu21:65606)NMP: nmp_ThrottleLogForDevice:3630: Cmd 0x25 (0x439d00afa480, 0) to dev "mpx.vmhba32:C0:T0:L0" on path "vmhba32:C0:T0:L0" Failed: H:0x7 D:0x0 P:0x0 Invalid s

2020-03-14T12:08:54.028Z cpu21:65606)ScsiDeviceIO: 3015: Cmd(0x439d00afa480) 0x25, CmdSN 0x16c0ec from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Invalid sense data: 0x30 0x2

2020-03-14T12:09:00.302Z cpu1:609657)ScsiPath: 5149: Command 0x0 (cmdSN 0x0, World 0) to path vmhba32:C0:T0:L0 timed out: expiry time occurs 1003ms in the past                                  

2020-03-14T12:09:00.302Z cpu1:609657)VMW_SATP_LOCAL: satp_local_updatePath:836: Failed to update path "vmhba32:C0:T0:L0" state. Status=Transient storage condition, suggest retry

 

In syslog.log

 

2020-03-14T12:11:25Z sfcb-vmware_base[67999]: updateHealthStateFromSel(637) Invalid "RecordData" in Instance of root/cimv2:OMC_IpmiLogRecord.CreationClassName="OMC_IpmiLogRecord",LogCreationClassName="OMC_IpmiRecordLog",LogName="IPMI SEL",MessageTimestamp="20200207185931.000000+000",RecordID="25" {  PATH: root/cimv2:OMC_IpmiLogRecord.CreationClassName="OMC_IpmiLogRecord",LogCreationClassName="OMC_IpmiRecordLog",LogName="IPMI SEL",MessageTimestamp="20200207185931.000000+000",RecordID="25"  RecordID = "25" ;  RecordFormat = "*string CIM_Sensor.DeviceID*uint8[2] IPMI_RecordID*uint8 IPMI_RecordType*uint8[4] IPMI_Timestamp*uint8[2] IPMI_GeneratorID*uint8 IPMI_EvMRev*uint8 IPMI_SensorType*uint8 IPMI_SensorNumber*boolean IPMI_AssertionEvent*uint8 IPMI_EventType*uint8 IPMI_EventData1*uint8 IPMI_EventData2*uint8 IPMI_EventData3*uint32 IANA*" ;  RecordData = "*100*100.0.32*25 0

2020-03-14T12:11:25Z sfcb-vmware_base[67999]: *2*147 179 61 94*32 0*4*27*100*100*false*111*1*255*255*1*" ;  MessageTimestamp = "20200207185931.000000+000" ;  LogName = "IPMI SEL" ;  LogCreationClassName = "OMC_IpmiRecordLog" ;  Description = "Assert + Cable/Interconnect Config Error" ;  CreationClassName = "OMC_IpmiLogRecord" ; }

2020-03-14T12:12:06Z snmpd: load_lags: no port group configs found

2020-03-14T12:12:06Z snmpd: lookup_vswitch: fetch VSI_MODULE_NODE_PortCfgs_properties failed Not found

2020-03-14T12:12:06Z snmpd: lookup_vswitch: fetch VSI_MODULE_NODE_PortCfgs_properties failed Not found

2020-03-14T12:12:06Z snmpd: lookup_vswitch: fetch VSI_MODULE_NODE_PortCfgs_properties failed Not found

2020-03-14T12:13:21Z snmpd: load_lags: no port group configs found

2020-03-14T12:13:21Z snmpd: lookup_vswitch: fetch VSI_MODULE_NODE_PortCfgs_properties failed Not found

2020-03-14T12:13:21Z snmpd: lookup_vswitch: fetch VSI_MODULE_NODE_PortCfgs_properties failed Not found

2020-03-14T12:13:21Z snmpd: lookup_vswitch: fetch VSI_MODULE_NODE_PortCfgs_properties failed Not found

2020-03-14T12:13:25Z sfcb-vmware_base[67999]: updateHealthStateFromSel(637) Invalid "RecordData" in Instance of root/cimv2:OMC_IpmiLogRecord.CreationClassName="OMC_IpmiLogRecord",LogCreationClassName="OMC_IpmiRecordLog",LogName="IPMI SEL",MessageTimestamp="20200207185931.000000+000",RecordID="25" {  PATH: root/cimv2:OMC_IpmiLogRecord.CreationClassName="OMC_IpmiLogRecord",LogCreationClassName="OMC_IpmiRecordLog",LogName="IPMI SEL",MessageTimestamp="20200207185931.000000+000",RecordID="25"  RecordID = "25" ;  RecordFormat = "*string CIM_Sensor.DeviceID*uint8[2] IPMI_RecordID*uint8 IPMI_RecordType*uint8[4] IPMI_Timestamp*uint8[2] IPMI_GeneratorID*uint8 IPMI_EvMRev*uint8 IPMI_SensorType*uint8 IPMI_SensorNumber*boolean IPMI_AssertionEvent*uint8 IPMI_EventType*uint8 IPMI_EventData1*uint8 IPMI_EventData2*uint8 IPMI_EventData3*uint32 IANA*" ;  RecordData = "*100*100.0.32*25 0

2020-03-14T12:13:25Z sfcb-vmware_base[67999]: *2*147 179 61 94*32 0*4*27*100*100*false*111*1*255*255*1*" ;  MessageTimestamp = "20200207185931.000000+000" ;  LogName = "IPMI SEL" ;  LogCreationClassName = "OMC_IpmiRecordLog" ;  Description = "Assert + Cable/Interconnect Config Error" ;  CreationClassName = "OMC_IpmiLogRecord" ; }

2020-03-14T12:14:36Z snmpd: load_lags: no port group configs found

2020-03-14T12:14:36Z snmpd: lookup_vswitch: fetch VSI_MODULE_NODE_PortCfgs_properties failed Not found

2020-03-14T12:14:36Z snmpd: lookup_vswitch: fetch VSI_MODULE_NODE_PortCfgs_properties failed Not found

2020-03-14T12:14:36Z snmpd: lookup_vswitch: fetch VSI_MODULE_NODE_PortCfgs_properties failed Not found

2020-03-14T12:15:00Z crond[66574]: USER root pid 661641 cmd /bin/hostd-probe.sh ++group=host/vim/vmvisor/hostd-probe/stats/sh

2020-03-14T12:15:01Z syslog[661644]: starting hostd probing.

2020-03-14T12:15:26Z sfcb-vmware_base[67999]: updateHealthStateFromSel(637) Invalid "RecordData" in Instance of root/cimv2:OMC_IpmiLogRecord.CreationClassName="OMC_IpmiLogRecord",LogCreationClassName="OMC_IpmiRecordLog",LogName="IPMI SEL",MessageTimestamp="20200207185931.000000+000",RecordID="25" {  PATH: root/cimv2:OMC_IpmiLogRecord.CreationClassName="OMC_IpmiLogRecord",LogCreationClassName="OMC_IpmiRecordLog",LogName="IPMI SEL",MessageTimestamp="20200207185931.000000+000",RecordID="25"  RecordID = "25" ;  RecordFormat = "*string CIM_Sensor.DeviceID*uint8[2] IPMI_RecordID*uint8 IPMI_RecordType*uint8[4] IPMI_Timestamp*uint8[2] IPMI_GeneratorID*uint8 IPMI_EvMRev*uint8 IPMI_SensorType*uint8 IPMI_SensorNumber*boolean IPMI_AssertionEvent*uint8 IPMI_EventType*uint8 IPMI_EventData1*uint8 IPMI_EventData2*uint8 IPMI_EventData3*uint32 IANA*" ;  RecordData = "*100*100.0.32*25 0

2020-03-14T12:15:26Z sfcb-vmware_base[67999]: *2*147 179 61 94*32 0*4*27*100*100*false*111*1*255*255*1*" ;  MessageTimestamp = "20200207185931.000000+000" ;  LogName = "IPMI SEL" ;  LogCreationClassName = "OMC_IpmiRecordLog" ;  Description = "Assert + Cable/Interconnect Config Error" ;  CreationClassName = "OMC_IpmiLogRecord" ; }

2020-03-14T12:15:51Z snmpd: load_lags: no port group configs found

2020-03-14T12:15:51Z snmpd: lookup_vswitch: fetch VSI_MODULE_NODE_PortCfgs_properties failed Not found

2020-03-14T12:15:51Z snmpd: lookup_vswitch: fetch VSI_MODULE_NODE_PortCfgs_properties failed Not found

2020-03-14T12:15:51Z snmpd: lookup_vswitch: fetch VSI_MODULE_NODE_PortCfgs_properties failed Not found

 

Nothing else relevant in other log files...

 

vmksummary.log

 

2020-03-13T07:00:00Z heartbeat: up 34d11h34m35s, 10 VMs; [[242166 vmx 8015872kB] [129039 vmx 8384748kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]       

2020-03-13T08:00:01Z heartbeat: up 34d12h34m35s, 10 VMs; [[242166 vmx 8017920kB] [129039 vmx 8384944kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]       

2020-03-13T09:00:01Z heartbeat: up 34d13h34m35s, 10 VMs; [[242166 vmx 8017920kB] [129039 vmx 8385800kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]       

2020-03-13T10:00:01Z heartbeat: up 34d14h34m35s, 10 VMs; [[242166 vmx 8007680kB] [129039 vmx 8385032kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]       

2020-03-13T11:00:01Z heartbeat: up 34d15h34m36s, 10 VMs; [[242166 vmx 8017920kB] [129039 vmx 8384744kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]       

2020-03-13T12:00:01Z heartbeat: up 34d16h34m36s, 10 VMs; [[242166 vmx 8028160kB] [129039 vmx 8385772kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]       

2020-03-13T13:00:00Z heartbeat: up 34d17h34m35s, 10 VMs; [[242166 vmx 8028156kB] [129039 vmx 8384668kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]       

2020-03-13T14:00:01Z heartbeat: up 34d18h34m35s, 10 VMs; [[242166 vmx 8028136kB] [129039 vmx 8384736kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]       

2020-03-13T15:00:00Z heartbeat: up 34d19h34m35s, 10 VMs; [[242166 vmx 8028152kB] [129039 vmx 8385832kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]       

2020-03-13T16:00:01Z heartbeat: up 34d20h34m35s, 10 VMs; [[242166 vmx 8028152kB] [129039 vmx 8385464kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]       

2020-03-13T17:00:01Z heartbeat: up 34d21h34m35s, 10 VMs; [[242166 vmx 8030208kB] [129039 vmx 8385024kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]       

2020-03-13T18:00:01Z heartbeat: up 34d22h34m36s, 10 VMs; [[242166 vmx 8036328kB] [129039 vmx 8386020kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]       

2020-03-13T19:00:00Z heartbeat: up 34d23h34m35s, 10 VMs; [[242166 vmx 8036352kB] [129039 vmx 8385780kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]       

2020-03-13T20:00:01Z heartbeat: up 35d0h34m35s, 10 VMs; [[242166 vmx 8036352kB] [129039 vmx 8385976kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]        

2020-03-13T21:00:00Z heartbeat: up 35d1h34m35s, 10 VMs; [[242166 vmx 8036352kB] [129039 vmx 8386060kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]        

2020-03-13T22:00:01Z heartbeat: up 35d2h34m35s, 10 VMs; [[242166 vmx 8036332kB] [129039 vmx 8384328kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]        

2020-03-13T23:00:01Z heartbeat: up 35d3h34m35s, 10 VMs; [[242166 vmx 8046564kB] [129039 vmx 8385820kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]        

2020-03-14T00:00:01Z heartbeat: up 35d4h34m35s, 10 VMs; [[242166 vmx 8044528kB] [129039 vmx 8385240kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]        

2020-03-14T01:00:00Z heartbeat: up 35d5h34m35s, 10 VMs; [[242166 vmx 8044528kB] [129039 vmx 8385908kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]        

2020-03-14T02:00:00Z heartbeat: up 35d6h34m35s, 10 VMs; [[242166 vmx 8044512kB] [129039 vmx 8385984kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]        

2020-03-14T03:00:01Z heartbeat: up 35d7h34m35s, 10 VMs; [[242166 vmx 8046576kB] [129039 vmx 8376440kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]        

2020-03-14T04:00:01Z heartbeat: up 35d8h34m35s, 10 VMs; [[242166 vmx 8050688kB] [129039 vmx 8375332kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]        

2020-03-14T05:00:01Z heartbeat: up 35d9h34m35s, 10 VMs; [[242166 vmx 8050688kB] [129039 vmx 8367740kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]        

2020-03-14T06:00:01Z heartbeat: up 35d10h34m35s, 10 VMs; [[242166 vmx 8062968kB] [129039 vmx 8376736kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]       

2020-03-14T07:00:01Z heartbeat: up 35d11h34m36s, 10 VMs; [[242166 vmx 7946236kB] [129039 vmx 8375944kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]       

2020-03-14T08:00:00Z heartbeat: up 35d12h34m35s, 10 VMs; [[242166 vmx 7946240kB] [129039 vmx 8375820kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]       

2020-03-14T09:00:01Z heartbeat: up 35d13h34m35s, 10 VMs; [[242166 vmx 8062968kB] [129039 vmx 8376704kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]       

2020-03-14T10:00:00Z heartbeat: up 35d14h34m35s, 10 VMs; [[242166 vmx 8062968kB] [129039 vmx 8375808kB] [121198 vmx 8388608kB]] [[242166 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]       

2020-03-14T11:00:01Z heartbeat: up 35d15h34m35s, 8 VMs; [[517902 vmx 6832408kB] [129039 vmx 8374600kB] [121198 vmx 8388608kB]] [[517902 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]        

2020-03-14T12:00:01Z heartbeat: up 35d16h34m35s, 7 VMs; [[517902 vmx 6826876kB] [129039 vmx 8377732kB] [121198 vmx 8388608kB]] [[517902 vmx 0%max] [129039 vmx 0%max] [121198 vmx 0%max]]

 

vmkeventd.log

 

Nothing new since the ESXi boot date.

 

2020-02-07T19:14:21Z vmkeventd[66045]: DICT  featMask.evc.cpuid.MWAIT = "Val:1"

2020-02-07T19:14:21Z vmkeventd[66045]: DICT     featMask.evc.cpuid.NX = "Val:1"

2020-02-07T19:14:21Z vmkeventd[66045]: DICT     featMask.evc.cpuid.SS = "Val:1"

2020-02-07T19:14:21Z vmkeventd[66045]: DICT   featMask.evc.cpuid.SSE3 = "Val:1"

2020-02-07T19:14:21Z vmkeventd[66045]: DICT  featMask.evc.cpuid.SSSE3 = "Val:1"

2020-02-07T19:14:21Z vmkeventd[66045]: DICT --- SITE DEFAULTS /usr/lib/vmware/config

2020-02-07T19:14:21Z vmkeventd[66045]: vmkmod: /vmfs is a not a regular file

2020-02-07T19:14:21Z vmkeventd[66045]: VMKMod_ComputeModPath failed for module vmfs: "Bad parameter" (bad0007)

2020-02-07T19:15:04Z mark: storage-path-claim-completed

2020-02-07T19:25:34Z vmkeventd[66047]: DictionaryLoad: Cannot open file "/usr/lib/vmware/config": No such file or directory.

2020-02-07T19:25:34Z vmkeventd[66047]: DICT --- GLOBAL SETTINGS /usr/lib/vmware/settings

2020-02-07T19:25:34Z vmkeventd[66047]: DICT --- NON PERSISTENT (null)

2020-02-07T19:25:34Z vmkeventd[66047]: DICT --- HOST DEFAULTS /etc/vmware/config

2020-02-07T19:25:34Z vmkeventd[66047]: DICT                    libdir = "/usr/lib/vmware"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT           authd.proxy.nfc = "vmware-hostd:ha-nfc"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT        authd.proxy.nfcssl = "vmware-hostd:ha-nfcssl"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT   authd.proxy.vpxa-nfcssl = "vmware-vpxa:vpxa-nfcssl"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT      authd.proxy.vpxa-nfc = "vmware-vpxa:vpxa-nfc"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT            authd.fullpath = "/sbin/authd"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT featureCompat.evc.completeMasks = "TRUE"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT  featMask.evc.cpuid.Intel = "Val:1"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT featMask.evc.cpuid.FAMILY = "Val:6"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT  featMask.evc.cpuid.MODEL = "Val:0xf"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT featMask.evc.cpuid.STEPPING = "Val:1"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT featMask.evc.cpuid.NUMLEVELS = "Val:0xa"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT featMask.evc.cpuid.NUM_EXT_LEVELS = "Val:0x80000008"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT featMask.evc.cpuid.CMPXCHG16B = "Val:1"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT     featMask.evc.cpuid.DS = "Val:1"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT featMask.evc.cpuid.LAHF64 = "Val:1"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT     featMask.evc.cpuid.LM = "Val:1"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT  featMask.evc.cpuid.MWAIT = "Val:1"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT     featMask.evc.cpuid.NX = "Val:1"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT     featMask.evc.cpuid.SS = "Val:1"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT   featMask.evc.cpuid.SSE3 = "Val:1"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT  featMask.evc.cpuid.SSSE3 = "Val:1"

2020-02-07T19:25:34Z vmkeventd[66047]: DICT --- SITE DEFAULTS /usr/lib/vmware/config

2020-02-07T19:25:34Z vmkeventd[66047]: vmkmod: /vmfs is a not a regular file

2020-02-07T19:25:34Z vmkeventd[66047]: VMKMod_ComputeModPath failed for module vmfs: "Bad parameter" (bad0007)

2020-02-07T19:26:17Z mark: storage-path-claim-completed

 

In January we reinstalled the hypervisor OS on the pen drive, after first taking a config backup and restoring it on the hypervisor again. Same problem.

 

Is it clearly a hypervisor pen drive problem, or could it be something else?

 

We don't know what to do now...

 

Thanks a lot.

VMware ESXi 6.x Failed Port Forwarding


Please help me. I have tried a lot but failed to access my VM through its public IP. Where am I wrong? Please guide me.

This is my ESXi machine on local IP 192.168.1.150:

2020-03-14-16-06-15.png

 

VM machine with IP configurations mentioned in image

 

2020-03-14-16-06-58.png

 

Jenkins Installed on port 8080

2020-03-14-16-07-30.png

Port forwarding / Port Mapping settings on Vodafone Router

2020-03-14-16-08-55.png

This is my Public IP same ping OK

2020-03-14-16-09-47.png

 

This is the result. Please guide me on where I am wrong:

2020-03-14-16-10-45.png

VMNIC sequence numbers relative to Cisco UCS VNIC sequence numbers


I work mostly on the Cisco UCS side of a UCS-VMware environment. I recently added a chassis and blades, and the VMware admins added the ESXi bit. But one thing is not lining up that I hope someone can explain: in the past, the UCS blade vNIC ordering aligned with the sequence of vmnics in vSphere.

 

What causes that mapping of vmnic MAC address ordering? Is it the VMware admin adding the vmnics to the host without regard to the incrementing order of MAC addresses? Or does that just happen automagically? Thank you for helping me clarify this mystery.

It's causing some thrash in getting the correct VLANs and the right FI redundancy to each host.

 

Preferred vNIC-to-vmnic alignment of sequence numbers..

But now the UCS vNICs versus the vmnics in vSphere are jumbled up..
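As background for comparing the two sides (a sketch, not UCS-specific guidance): ESXi generally assigns vmnic aliases from PCI device enumeration order at first boot, so the current vmnic-to-PCI-address-to-MAC mapping can be dumped on each host and lined up against the UCS vNIC placement order:

```shell
# Lists each vmnic with its PCI address and MAC address; the vmnic
# aliases were assigned from PCI enumeration order at first boot.
esxcli network nic list

# The persistent alias-to-PCI mapping is kept in /etc/vmware/esx.conf
grep -i vmnic /etc/vmware/esx.conf
```

If the MAC addresses listed here do not increment in the same order as the UCS vNIC placement, the jumbling happened at enumeration time rather than through any manual admin step.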

Unable to add datastore


I just acquired 2 x IBM System x3650 M4 MT 7915 servers for my lab and installed VMware ESXi 5.5 U3 (Lenovo Custom Image).

 

This system has an onboard RAID card, the M511e. I created a 4-disk RAID 5 array and tried to set up a VMFS5 datastore, but it wouldn't let me. I thought I was hitting the issue where ESXi can't create a new VMFS partition because of an existing partition table on the array, so I rebooted the server, removed the RAID 5 array, and created a new RAID 0 array with "fast init" (which wipes the first 100 MB of the disk), but ESXi still cannot format this new volume and set up a VMFS volume.

 

The vSphere client shows

 

Call "HostDatastoreSystem.CreateVmfsDatastore" for object "datastoreSystem-68145" on vCenter Server "vc01" failed.

Operation failed, diagnostics report:  Unable to create Filesystem, please see VMkernel log for more details: No such device

 

I've done a bunch of reading on this but not have been successfully able to create a new volume. I have also tried using partedUtil to purge the partition table from the volume but that has not helped either.

 

Also thought I'd add - I booted the gparted Live CD and was able to manipulate the disk / partition table etc so I don't think this is a hardware issue... but rather something with ESXi.

 

Output of partedUtil:

 

# partedUtil getptbl /vmfs/devices/disks/naa.600507605818f480260043b32a278766
gpt
291296 255 63 4679680000
1 2048 4679679966 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
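Since the post mentions trying partedUtil to purge the partition table, here is a minimal sketch of that approach. The device path and partition number are taken from the getptbl output above (label `gpt`, disk geometry, then one VMFS partition numbered 1); run this only against a disk that holds no data:

```shell
# Sketch: remove the stale VMFS partition (partition number 1, per the
# getptbl output above), then write a fresh, empty GPT label.
DISK="/vmfs/devices/disks/naa.600507605818f480260043b32a278766"
partedUtil delete "$DISK" 1
partedUtil setptbl "$DISK" gpt

# Re-read the (now empty) partition table to confirm
partedUtil getptbl "$DISK"
```

Note, though, that the `H:0x7` host-status codes in the vmkernel.log below indicate the I/O is failing at the HBA/driver level, so a clean partition table alone may not be enough.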

 

Anyone know what could be causing this?

 

Here is the relevant section from the vmkernel.log:

 

2020-03-15T02:42:58.033Z cpu26:34251 opID=3ddc49d5)World: 14302: VC opID 7977CB38-0000076E-be-f2 maps to vmkernel opID 3ddc49d5
2020-03-15T02:42:58.033Z cpu26:34251 opID=3ddc49d5)Vol3: 731: Couldn't read volume header from control: Not supported
2020-03-15T02:42:58.033Z cpu26:34251 opID=3ddc49d5)Vol3: 731: Couldn't read volume header from control: Not supported
2020-03-15T02:42:58.033Z cpu26:34251 opID=3ddc49d5)FSS: 5099: No FS driver claimed device 'control': Not supported
2020-03-15T02:42:58.081Z cpu26:34251 opID=3ddc49d5)VC: 2052: Device rescan time 30 msec (total number of devices 10)
2020-03-15T02:42:58.081Z cpu26:34251 opID=3ddc49d5)VC: 2055: Filesystem probe time 122 msec (devices probed 9 of 10)
2020-03-15T02:42:58.081Z cpu26:34251 opID=3ddc49d5)VC: 2057: Refresh open volume time 0 msec
2020-03-15T02:42:58.265Z cpu26:34251 opID=3ddc49d5)Vol3: 731: Couldn't read volume header from control: Not supported
2020-03-15T02:42:58.265Z cpu26:34251 opID=3ddc49d5)Vol3: 731: Couldn't read volume header from control: Not supported
2020-03-15T02:42:58.265Z cpu26:34251 opID=3ddc49d5)FSS: 5099: No FS driver claimed device 'control': Not supported
2020-03-15T02:42:58.317Z cpu26:34251 opID=3ddc49d5)VC: 2052: Device rescan time 32 msec (total number of devices 10)
2020-03-15T02:42:58.317Z cpu26:34251 opID=3ddc49d5)VC: 2055: Filesystem probe time 117 msec (devices probed 9 of 10)
2020-03-15T02:42:58.317Z cpu26:34251 opID=3ddc49d5)VC: 2057: Refresh open volume time 0 msec
2020-03-15T02:42:58.319Z cpu45:33712)<7>megasas: TGT : 0, FW status 0x7e
2020-03-15T02:42:58.319Z cpu39:33141)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "naa.600507605818f480260043b32a278766" state in doubt; requested fast path state update...
2020-03-15T02:42:58.319Z cpu39:33141)ScsiDeviceIO: 2363: Cmd(0x4136803f9880) 0x28, CmdSN 0x4 from world 34251 to dev "naa.600507605818f480260043b32a278766" failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
2020-03-15T02:42:58.323Z cpu45:33712)<7>megasas: TGT : 0, FW status 0x7e
2020-03-15T02:42:58.328Z cpu45:33712)<7>megasas: TGT : 0, FW status 0x7e
2020-03-15T02:42:58.330Z cpu45:33712)<7>megasas: TGT : 0, FW status 0x7e
2020-03-15T02:42:58.330Z cpu39:33141)ScsiDeviceIO: 2363: Cmd(0x4136803f9240) 0x28, CmdSN 0x4 from world 34251 to dev "naa.600507605818f480260043b32a278766" failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
2020-03-15T02:42:58.335Z cpu45:33712)<7>megasas: TGT : 0, FW status 0x7e
2020-03-15T02:42:58.339Z cpu45:33712)<7>megasas: TGT : 0, FW status 0x7e
2020-03-15T02:42:58.355Z cpu26:34251 opID=3ddc49d5)LVM: 7580: Initialized naa.600507605818f480260043b32a278766:1, devID 5e6d9632-be877c91-557b-a0369f44affc
2020-03-15T02:42:58.356Z cpu26:34251 opID=3ddc49d5)LVM: 7668: Zero volumeSize specified: using available space (2395977268736).
2020-03-15T02:42:58.857Z cpu45:33712)<7>megasas: TGT : 0, FW status 0x7e
2020-03-15T02:42:58.857Z cpu46:33196)ScsiDeviceIO: 2363: Cmd(0x413686012f00) 0x28, CmdSN 0x3 from world 34251 to dev "naa.600507605818f480260043b32a278766" failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
2020-03-15T02:42:58.862Z cpu45:33712)<7>megasas: TGT : 0, FW status 0x7e
2020-03-15T02:42:58.862Z cpu46:33196)NMP: nmp_ThrottleLogForDevice:2349: Cmd 0x28 (0x413686012f00, 34251) to dev "naa.600507605818f480260043b32a278766" on path "vmhba0:C2:T0:L0" Failed: H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0. Act:EVAL
2020-03-15T02:42:58.866Z cpu45:33712)<7>megasas: TGT : 0, FW status 0x7e
2020-03-15T02:42:58.866Z cpu40:34251 opID=3ddc49d5)LVM: 4258: PE grafting failed for device naa.600507605818f480260043b32a278766:1, vol 5e6d9632-bc1bc1f1-6c3d-a0369f44affc/8244582585387261033: Storage initiator error
2020-03-15T02:43:01.358Z cpu26:32931)LVM: 13197: One or more LVM devices have been discovered.
2020-03-15T02:43:01.360Z cpu45:33712)<7>megasas: TGT : 0, FW status 0x7e
2020-03-15T02:43:01.361Z cpu14:33164)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "naa.600507605818f480260043b32a278766" state in doubt; requested fast path state update...
2020-03-15T02:43:01.361Z cpu14:33164)ScsiDeviceIO: 2363: Cmd(0x412e83174bc0) 0x28, CmdSN 0x4 from world 34252 to dev "naa.600507605818f480260043b32a278766" failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
2020-03-15T02:43:01.365Z cpu45:33712)<7>megasas: TGT : 0, FW status 0x7e
2020-03-15T02:43:01.370Z cpu45:33712)<7>megasas: TGT : 0, FW status 0x7e

Enabling onboard LSI SAS 2008 PCI passthrough crashes guest Win 2012 R2 vm


ESXi version: 5.5u3
Motherboard: SuperMicro X10SRH-CF BIOS v 3.2 (latest)
Guest VMs: Windows 2012 R2, with all mandatory updates installed

 

I'm using this ancient version of ESXi because it's the last one that SuperMicro asserts supports this motherboard.

 

Windows 2012 R2 can successfully access various PCI passthrough controllers in PCI slots, but when I enable passthrough of the onboard SAS 3008 controller, the VM won't boot.

 

But a Debian 10 VM starts fine and can talk to the 3008 without issue, so this is clearly Windows-related.

 

I'm an ESXi n00b. How do I go about debugging this? Which logs should I examine?

[Auto-Deploy] How ESXi reboot


Hello Gurus,

 

If I disconnect my boot device from ESXi and reboot the host, it will reboot, as the system is loaded into memory.

Fine. Then if I have ESXi booted via stateless Auto Deploy and it is rebooted while vCenter isn't available, ESXi will not boot. Isn't it loaded into memory as well?


ESXi 6.7 (and ESXi 7 ?) passthrough for Quadro RTX 4000 (and P2000) for local workstation use?


I am tired of having to mess with VM options in order to get NVIDIA cards to pass through.

(System is based on Supermicro X11DPG-QT with Xeon Silver so that's not the issue.)

I had better luck with AMD and am using RX550 with Windows 10 Pro and RX480 with Ubuntu 18.04.

 

I want to buy a single slot Quadro card that is supported for passthrough in ESXi 6.7 and will be supported in the upcoming ESXi 7

(vDGA - one card, one VM / not the vGPU sharing thing - GRID)

The Quadro P2000 is cheaper (and good enough for my needs) but support might be dropped with ESXi 7.

The Quadro RTX 4000 is more than double the price, and there is no mention of vDGA support for it in any ESXi version. The only info my searching could find was that someone was able to pass through both the RTX 6000 and RTX 8000 without any issues.

 

Any advice?

 

Thanks!

Unable to access Webgui/SSH for one of the ESXi Host.


Hi Gurus,

 

I have a two-node ESXi host cluster managed by vCenter, running ESXi 6.5 U2. Both hosts are managed by vCenter, both show as green in the vCenter console, and both have a similar configuration. When I try to access one of the ESXi hosts using the web GUI or SSH it is not accessible; however, the host is pingable. Lockdown mode is disabled on both hosts.

I also tried restarting the SSH, ESXi Shell, and DCUI daemons on the affected host from the vCenter console, but the issue is still not resolved.

 

Any suggestion or guidance by the Gurus is highly appreciated.

 

Thanks

 

AJ

Upgrading from ESXi 6.5 to 6.7 - PSOD #PF Exception 14 in world 2097594: vmkdevmgr IP 0x41800f2ff2c9


Hi...

Purple Screen of Death: #PF Exception 14 in world 2097594.

I have an old server with an Intel S5500BC motherboard. I've migrated the VMs to a new server, but I want to keep the old server as a backup server. It has been running without any problem until now, with ESXi 6.5 Release Build 5969303. I want the old server upgraded to the same version as the new server, so we can move VMs to it when we need to do maintenance on our new system. But if I try to upgrade to anything newer than ESXi 6.5 Release Build 5969303, I get a Purple Screen of Death. I've updated both the BIOS and the RAID controller firmware to the latest versions, but still the same problem.

 

Is there anyone who can interpret this PSOD, tell me the exact problem, and maybe suggest a way to fix this issue?

I've seen someone with a similar issue on the internet, but no answer.

 

This is the PSOD I'm getting when upgrading to version 6.7.U3b:

 

This version is running without problems.

Moving SYSLOG.GLOBAL.LOGDIR failing


VMware 6.7, ESXi 6.7

 

Trying to move my Syslog.global.logDir to another datastore.

 

I currently have it located on an internal SSD drive, and I need the SSD drive to build my vSAN datastore.

 

So I  have a USB Datastore and NFS datastores.

 

But when I select the new datastore I get this error: A general system error occurred: Internal error

 

 

I tried using esxcli commands:

 

esxcli system syslog config set --logdir=/TGCSESXI-17-USB-Datastore/Logfiles

Logdir must exist and be a directory.

 

Both NFS and USB datastore give the same error

 

Currently, the Syslog.global.logDir value in the GUI advanced settings shows:

[] /scratch/log
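A minimal sketch of the esxcli route, assuming the datastore name from the post above: the "Logdir must exist and be a directory" error usually means the path needs to be the full /vmfs/volumes/... mount path, and the target directory has to be created before esxcli will accept it.

```shell
# Sketch only; "TGCSESXI-17-USB-Datastore/Logfiles" is taken from the
# post above. Datastores are mounted under /vmfs/volumes/, and esxcli
# refuses a logdir that does not already exist as a directory.
mkdir -p /vmfs/volumes/TGCSESXI-17-USB-Datastore/Logfiles
esxcli system syslog config set --logdir=/vmfs/volumes/TGCSESXI-17-USB-Datastore/Logfiles

# Apply the new configuration
esxcli system syslog reload
```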

 

 

LSI_SAS driver

$
0
0

I have to update the LSI_SAS driver because of a problem in the virtual machine. I can't find a driver for Windows 2012. How do I find the latest driver? This is ESXi 6.5 U1, and the guest OS is Windows 2012.

Does this device driver come with Windows, or is it installed with VMware Tools?
