Channel: VMware Communities : All Content - ESXi
Viewing all 8313 articles

Resolved: Physical disk {} does not have the inquiry data (SCSI page 83h VPD descriptor) that is required by failover clustering -- With Server 2016.


Hi,

 

I received this error/failure during cluster validation. I googled for a solution and wasn't able to find one, so now that I have resolved it I wanted to post the fix.

Physical disk {} does not have the inquiry data (SCSI page 83h VPD descriptor) that is required by failover clustering

 

 

1.  We have VMs that were created from a template. (They may not have been Sysprepped, which may cause us other issues down the road.)

2.  One of the disks was a GPT disk.  (This was the GUID the Cluster Validation was complaining about.)

3.  This is Server 2016 running on VMware ESXi 6.5.

 

I did the following to resolve the issue:

1.  Removed the GPT Disk.

2.  Added another disk.

 

Re-ran the cluster validation, and that resolved the issue.
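For anyone hitting the same error: the SCSI page 83h VPD descriptor is the device identification data a guest uses to uniquely identify a disk, and failover clustering requires it. A related ESXi-side setting worth knowing about (shown here as a sketch, not as a confirmed fix for this particular case) is disk.EnableUUID, which exposes the virtual disk's serial number to the guest:

```
disk.EnableUUID = "TRUE"
```

This goes in the VM's .vmx file (or as an advanced configuration parameter) while the VM is powered off.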

Thanks,

Carl


My ESXi 6.0 build 7967664 randomly reboots


Hi All,

 

Can someone help me figure out why my host randomly reboots? I'm attaching some logs, and hopefully someone can point me in the right direction. I'm running a Dell PowerEdge R730 server.

 

Thanks in advance.

iSCSI storage unusable for a host - iSCSI Adapter interface path status displays "not used"


Hi everyone - I'm running vCenter 6, and I have a host that will no longer connect to our iSCSI NAS. Other hosts are working beautifully, but this one is determined to be aggravating. The host did connect correctly once before; then I added a second network adapter into the mix for redundancy (as I did on the other hosts), and from that point it disconnected from the NAS, never to work again. For the one adapter I have bound to the iSCSI software adapter, the client shows the physical interface as connected and online, but the path status column shows "not used" no matter what I've tried. Thanks so much in advance for any help that can be sent my way!

Windows Server 2012 R2 BSOD 0x0000013c


Hello,

 

One of my Windows 2012 R2 RDS servers crashes with a BSOD INVALID_IO_BOOST_STATE (code: 0x0000013c). Some searching told me that it should be a driver with a memory leak.

Guest: Windows Server 2012 R2 (up to date), hardware version 11, LSI Logic storage controller, E1000E NIC. Host: Dell PowerEdge R720, ESXi 6.0.0.

This is the only server with this issue, but it is also our sole RDS server in our VMware environment. Do you have any suggestions on how to fix this?

 

When I run the analysis on the minidump, it says the issue is with msoia.exe, which is the Microsoft Office Telemetry Agent. But when I submitted a Microsoft Office 365 ticket, they referred me to Windows Server (commercial) support, who referred me back to Microsoft Office 365. I've attached my minidump. Should I post my full dump? It's more than a gigabyte in size, so I'm holding off unless it's really needed.

 

Thanks in advance!

 

Regards,

 

Alan Chan

VMware ESXi 6.7: Cannot Download Kickstart Config File After iPXE Boot


Hello all --

 

We have successfully gotten the VMware ESXi 6.7 installer to boot from an HTTP server using iPXE (and automatically enable the serial console), but we are running into problems getting the fully booted installer to download and use the supplied kickstart config file. We have spent hours trying every possible combination of the documented boot options, and we have tried supplying those boot options as arguments to the iPXE 'kernel' command as well as in the 'kernelopt' boot.cfg option line.

 

No matter what we do, we receive the following error from the installer (our actual HTTP server hostname is replaced below and throughout this post with serverhostname).

 

Error (see log for more info):

Error copying / downloading file

File (http://serverhostname/packetnet-ipxe/esxi/6.7/ks.cfg) connection failed. Made 15 attempts.

 

We know the network adapter is working and is able to reach the HTTP server, because for the installer to get to the point of displaying this message, many of the installation image files had to be downloaded from the exact same HTTP server that hosts the ks.cfg file. We have seen several examples where people refer to the HTTP server hosting the ks.cfg file by its IP address instead of its hostname, but our web server requires a hostname to be supplied, so this is simply not an option for us.

 

Here is the latest version of our iPXE script:

 

#!ipxe

 

kernel mboot.c32 -c http://serverhostname/packetnet-ipxe/esxi/6.7/boot.cfg ip=${net0/ip} netmask=${net0/netmask} gateway=${net0/gateway} nameserver=8.8.8.8

boot

 

And our boot.cfg:

 

bootstate=0

title=Loading ESXi installer

timeout=5

prefix=http://serverhostname/packetnet-ipxe/esxi/6.7/iso

kernel=b.b00

kernelopt=runweasel text nofb com2_baud=115200 com2_Port=0x2f8 tty2Port=com2 gdbPort=none logPort=none ks=http://serverhostname/packetnet-ipxe/esxi/6.7/ks.cfg

modules=jumpstrt.gz --- useropts.gz --- features.gz --- k.b00 --- chardevs.b00 --- user.b00 --- procfs.b00 --- uc_intel.b00 --- uc_amd.b00 --- vmx.v00 --- vim.v00 --- sb.v00 --- s.v00 --- ata_liba.v00 --- ata_pata.v00 --- ata_pata.v01 --- ata_pata.v02 --- ata_pata.v03 --- ata_pata.v04 --- ata_pata.v05 --- ata_pata.v06 --- ata_pata.v07 --- block_cc.v00 --- bnxtnet.v00 --- brcmfcoe.v00 --- char_ran.v00 --- ehci_ehc.v00 --- elxiscsi.v00 --- elxnet.v00 --- hid_hid.v00 --- i40en.v00 --- iavmd.v00 --- igbn.v00 --- ima_qla4.v00 --- ipmi_ipm.v00 --- ipmi_ipm.v01 --- ipmi_ipm.v02 --- iser.v00 --- ixgben.v00 --- lpfc.v00 --- lpnic.v00 --- lsi_mr3.v00 --- lsi_msgp.v00 --- lsi_msgp.v01 --- lsi_msgp.v02 --- misc_cni.v00 --- misc_dri.v00 --- mtip32xx.v00 --- ne1000.v00 --- nenic.v00 --- net_bnx2.v00 --- net_bnx2.v01 --- net_cdc_.v00 --- net_cnic.v00 --- net_e100.v00 --- net_e100.v01 --- net_enic.v00 --- net_fcoe.v00 --- net_forc.v00 --- net_igb.v00 --- net_ixgb.v00 --- net_libf.v00 --- net_mlx4.v00 --- net_mlx4.v01 --- net_nx_n.v00 --- net_tg3.v00 --- net_usbn.v00 --- net_vmxn.v00 --- nhpsa.v00 --- nmlx4_co.v00 --- nmlx4_en.v00 --- nmlx4_rd.v00 --- nmlx5_co.v00 --- nmlx5_rd.v00 --- ntg3.v00 --- nvme.v00 --- nvmxnet3.v00 --- nvmxnet3.v01 --- ohci_usb.v00 --- pvscsi.v00 --- qcnic.v00 --- qedentv.v00 --- qfle3.v00 --- qfle3f.v00 --- qfle3i.v00 --- qflge.v00 --- sata_ahc.v00 --- sata_ata.v00 --- sata_sat.v00 --- sata_sat.v01 --- sata_sat.v02 --- sata_sat.v03 --- sata_sat.v04 --- scsi_aac.v00 --- scsi_adp.v00 --- scsi_aic.v00 --- scsi_bnx.v00 --- scsi_bnx.v01 --- scsi_fni.v00 --- scsi_hps.v00 --- scsi_ips.v00 --- scsi_isc.v00 --- scsi_lib.v00 --- scsi_meg.v00 --- scsi_meg.v01 --- scsi_meg.v02 --- scsi_mpt.v00 --- scsi_mpt.v01 --- scsi_mpt.v02 --- scsi_qla.v00 --- shim_isc.v00 --- shim_isc.v01 --- shim_lib.v00 --- shim_lib.v01 --- shim_lib.v02 --- shim_lib.v03 --- shim_lib.v04 --- shim_lib.v05 --- shim_vmk.v00 --- shim_vmk.v01 --- shim_vmk.v02 --- smartpqi.v00 --- uhci_usb.v00 --- usb_stor.v00 --- usbcore_.v00 --- vmkata.v00 --- vmkfcoe.v00 --- vmkplexe.v00 --- vmkusb.v00 --- vmw_ahci.v00 --- xhci_xhc.v00 --- elx_esx_.v00 --- btldr.t00 --- weaselin.t00 --- esx_dvfi.v00 --- esx_ui.v00 --- lsu_hp_h.v00 --- lsu_lsi_.v00 --- lsu_lsi_.v01 --- lsu_lsi_.v02 --- lsu_lsi_.v03 --- native_m.v00 --- qlnative.v00 --- rste.v00 --- vmware_e.v00 --- vsan.v00 --- vsanheal.v00 --- vsanmgmt.v00 --- tools.t00 --- xorg.v00 --- imgdb.tgz --- imgpayld.tgz

build=

updated=0

 

And our ks.cfg:

 

vmaccepteula

rootpw !Default0

install --firstdisk

reboot
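One thing that might be worth testing (this is an assumption on my part, not something I can confirm for 6.7): the ip/netmask/gateway/nameserver values passed on the iPXE 'kernel' line may not survive into the fully booted installer, which takes its options from the kernelopt line in boot.cfg. Duplicating the network options there would look like this (the addresses shown are placeholders):

```
kernelopt=runweasel text nofb com2_baud=115200 com2_Port=0x2f8 tty2Port=com2 gdbPort=none logPort=none ip=198.51.100.10 netmask=255.255.255.0 gateway=198.51.100.1 nameserver=8.8.8.8 ks=http://serverhostname/packetnet-ipxe/esxi/6.7/ks.cfg
```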

 

Anyone have any ideas?

 

Thanks!

 

Joe

Connect to ESXi 6.7 via the VMware vSphere Client


hi

I want to connect to ESXi 6.7 with the "VMware vSphere Client" desktop application, not with a browser.

 

thanks in advance

Slower disk read rate after upgrading hardware to VM Version 11


Hi,

 

We recently upgraded our vSphere from 5.5 to 6.5 and migrated all the VMs. I noticed that after upgrading VM hardware to version 11 or above, we get slower disk read rates on all the VMs.

As a test, I created a new VM with hardware version 8 or 9 and ran a disk speed test; I got a 307 MB/s read rate. After I upgraded the VM to version 11 or above, I got 228 MB/s.

I have compared the hardware settings but didn't find any difference between version 8 and 11. Any ideas?
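For scale, the numbers above work out to roughly a quarter of the read throughput lost:

```python
# Read rates reported above: hardware version 8/9 vs. version 11+.
before, after = 307.0, 228.0  # MB/s
drop_pct = (before - after) / before * 100
print(f"{drop_pct:.1f}% slower")  # -> 25.7% slower
```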

 

Thanks

 

Compare results.jpg

 

Vcenter compare.jpg

Unable to connect to my ESXI host static ip


Hi all, so I have ESXi installed on a PC and it is connected by ethernet to my router. I assigned the ESXi host a static ip (192.168.1.22), subnet is 255.255.255.0, default gateway is 0.0.0.0.

 

To set DNS, I ran ipconfig /all on my laptop (which is connected wirelessly to the same network), found the DNS servers, and set them on the ESXi host.

 

However, when I try to connect to 192.168.1.22, I am unable to connect.
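A quick sanity check (the laptop address below is an assumption for illustration; only 192.168.1.22/24 comes from the post): since both machines sit in the same /24, traffic between them never uses the default gateway, so the 0.0.0.0 gateway by itself shouldn't block the connection. That points toward the management network's uplink/VLAN or a firewall rather than routing:

```python
import ipaddress

host = ipaddress.ip_interface("192.168.1.22/24")    # from the post
laptop = ipaddress.ip_interface("192.168.1.50/24")  # assumed laptop address

# Same network -> packets are delivered directly, no gateway involved.
same_subnet = laptop.ip in host.network
print(same_subnet)  # -> True
```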

 

Can anyone help? Thank you.


ESXi 6.5 - How to make second Port Group available to add to Virtual Machine Network Adapter?


I'm new to VMware/vSphere/ESXi, so I'm struggling to get this initial configuration complete. I have installed ESXi 6.5 and have been able to complete most of the configuration on the host. I installed a new virtual machine from an OVA file, and that went well.

 

Now, we have two networks: a management network and a data network. These are two completely isolated networks. I was able to configure the host to connect to both networks, and I created a new port group/vSwitch/VLAN on the host. I want to add that network to the VM, but when I edit the virtual machine, add the network adapter, and attempt to assign the second port group, it is not available in the drop-down list. How do I make it available to assign to the third network adapter?

 

Virtual_Machine_Network_Port_Group_Not_Available.JPG

 

Host_Port_Groups.JPG

 

vSwitch1.JPG

Internal NIC clashing with Expansion


Hi There,

 

I've got a problem with an add-on NIC (using SFPs) clashing with the internal NIC. When the expansion card is plugged in, ESXi doesn't see the SFP ports on either card; when it isn't plugged in, ESXi sees the internal ports fine; and if I disable the SFP ports on the internal card, it then sees the ports on the expansion card OK. I'm thinking it might be because they use the same driver (the expansion card uses the same driver as the internal card). It's a new Dell PowerEdge 740 server; the expansion card came with it and shows up fine in the BIOS. I have tried ESXi 6.5 and 6.7. I can see ESXi recognising both cards and showing all ports in the CLI, but that doesn't filter through to the web console, where they don't show. I have downloaded and installed the latest drivers, and as mentioned, the internal SFPs appear if I take the expansion card out, and the expansion card works if I disable the SFP ports on the internal NIC. I have checked that the NICs are approved and use the Intel Ethernet Controller 82599, and I have tried driver versions 4.5.3 and 4.5.2. Any suggestions? Has anyone else had these sorts of issues?

 

Cheers

How to import internal ADCS wildcard certificate to standalone host


Hi,

 

I have a single ESXi free standalone host, and I would like to replace the self-signed cert with a wildcard cert generated by my internal ADCS CA.

Can anyone tell me in simple terms how to complete this?

 

Cheers

Eds

VMware 6.7 Issue with intel x540-t2 SR-IOV


We are currently running the latest versions of everything, including version 10.2.5 of the tools. But the problem is unfortunately still there.

LAB - Shared Storage Advice for 2 Physical ESXi Hosts


I have had 2 physical ESXi boxes (HP ProLiant Gen8) running 5.5 for a while now and would like to explore the advantages that shared storage brings. I have a spare HP N36L - would this be sufficient for me to use as a shared storage box? Ideally I'd like to remove the hard drives from the 2 ESXi boxes and store them in the shared storage box.

 

What methods do you guys recommend that would improve my hands-on shared storage knowledge and give me the most 'corporate'-like setup? I've heard of FreeNAS, OpenFiler, StarWind, etc., but I'm just not too sure which road to venture down. iSCSI? NFS? Why one over the other?

 

Are there any caveats I should be aware of?

Question on ESXi upgrade with SuperMicro hardware


Hi Guys,

 

The Supermicro hardware model is X10DRH-C with an Intel Xeon E5-2650 v4. According to Supermicro's OS compatibility list, this hardware supports up to ESXi 6.0.0 U2, so an upgrade to ESXi 6.5.0 U1 is not supported (per the Supermicro article). However, the Intel Xeon E5-2650 v4 is listed as supported for ESXi 6.5.0 in the VMware compatibility guide, while the Supermicro model X10DRH-C does not appear in that same guide.

Can anyone advise whether this upgrade is supported or not?

 

Thanks,

Manivel RR

Can I migrate from 6.5 to 6.7?


I'm currently running ESXi 6.5.0 (build 4887370) and using Workstation 12.5.7 build 5813279.

I would like to migrate to the new version, 6.7, but I would not like to run into problems later; for example, being forced to also upgrade Workstation.

In other words, it must be a "zero cost" upgrade.

 

I wanted your opinion.

Thank you.


ESXi 6.5: lsusb shows USB device, but host only shows Avocent mass storage function as available USB device


Hi all!

I just upgraded from 5.5 to 6.5, and when I try to add my USB 3 HD it's not working ...

The only USB device available is this "Avocent mass storage function". It was working fine on 5.5.

Any clue?

Regards

You can see the lsusb result attached, and usbarbitrator is on.

lsusb.JPG

 

avocent.JPG

Failed to power on virtual machine. Unable to enumerate all disks


This weekend, the hard drive that ESXi 6.5 was installed on died. I was unable to recover the configuration, so I got a USB stick and loaded ESXi 6.7 on it. I re-added all of my VMs from the storage RAID, and I was able to boot all of them except the most important one, of course. I'm getting the error "Failed to power on virtual machine Tech Connect. Unable to enumerate all disks." I've tried googling and have found nothing related to this error. Attached is a screenshot of the complete error message.

Screen Shot 2018-04-30 at 8.50.02 AM.png

Thank you.

 


Is Pass through Required


I have a server with an i3-3220 processor.

I know the processor doesn't support VT-d, and at this time I'm unsure about the motherboard.

Before I go and invest $$ into new equipment, I am wondering whether I need passthrough or not.

 

I plan on setting up a Home Assistant VM and a MythTV VM.

The Home Assistant VM will need to access USB devices, and the MythTV VM will need to access a PCI tuner card and the serial port.

 

Will my VMs be able to access the USB devices, PCI card, and serial port without passthrough?

 

Thanks from a newbie.

Bootbank cannot be found at path '/bootbank' - ESX 6.0.0 6921384


Hi,

 

We've encountered this issue twice in two weeks on one of our hosted servers.

 

"Bootbank cannot be found at path '/bootbank'" was displayed, and every VM was grayed out.

 

Once it happens, nothing more is logged.

 

The only solution is to restart the ESX.

 

Below are a few log lines from before the issue happened:

 

vmkernel.log

 

2018-04-10T23:04:49.964Z cpu9:36122)WARNING: CBT: 2080: Unsupported ioctl 61

2018-04-10T23:04:49.964Z cpu9:36122)WARNING: CBT: 2080: Unsupported ioctl 60

2018-04-10T23:04:49.964Z cpu9:36122)WARNING: CBT: 2080: Unsupported ioctl 44

2018-04-10T23:04:49.964Z cpu9:36122)VSCSI: 4047: handle 8823(vscsi0:1):Creating Virtual Device for world 36114 (FSS handle 26283523) numBlocks=346030080 (bs=512)

2018-04-10T23:04:49.964Z cpu9:36122)VSCSI: 273: handle 8823(vscsi0:1):Input values: res=0 limit=-2 bw=-1 Shares=1000

2018-04-10T23:04:49.964Z cpu9:36122)Vmxnet3: 15414: Disable Rx queuing; queue size 256 is larger than Vmxnet3RxQueueLimit limit of 128.

2018-04-10T23:04:49.964Z cpu9:36122)Vmxnet3: 15664: Using default queue delivery for vmxnet3 for port 0x200000b

2018-04-10T23:04:49.964Z cpu9:36122)NetPort: 1575: enabled port 0x200000b with mac 00:50:56:0a:c6:77

2018-04-10T23:04:50.044Z cpu0:34630)FSS: 6264: Failed to open file 'stark.vmdk'; Requested flags 0x40001, world: 34630 [hostd-worker], (Existing flags 0x8, world: 36123 [vmx-vcpu-1:stark]): Busy

2018-04-10T23:30:01.300Z cpu5:36114)ScsiDeviceIO: 2636: Cmd(0x43b5cdf81140) 0x1a, CmdSN 0x15b95 from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-04-10T23:45:08.131Z cpu8:35869)WARNING: ScsiDeviceIO: 1243: Device naa.600605b00a093f001d12d2537f796f0f performance has deteriorated. I/O latency increased from average value of 3532 microseconds to 106428 microseconds.

2018-04-10T23:45:08.322Z cpu8:35869)WARNING: ScsiDeviceIO: 1243: Device naa.600605b00a093f001d12d2537f796f0f performance has deteriorated. I/O latency increased from average value of 3532 microseconds to 215223 microseconds.

2018-04-10T23:45:22.669Z cpu11:35870)ScsiDeviceIO: 1217: Device naa.600605b00a093f001d12d2537f796f0f performance has improved. I/O latency reduced from 215223 microseconds to 41912 microseconds.

2018-04-10T23:46:40.296Z cpu3:36071)ScsiDeviceIO: 1217: Device naa.600605b00a093f001d12d2537f796f0f performance has improved. I/O latency reduced from 41912 microseconds to 8113 microseconds.

2018-04-10T23:47:18.503Z cpu7:35361)ScsiDeviceIO: 1217: Device naa.600605b00a093f001d12d2537f796f0f performance has improved. I/O latency reduced from 8113 microseconds to 6887 microseconds.

2018-04-11T00:09:37.572Z cpu11:34569)ScsiDeviceIO: 2636: Cmd(0x43b5c0c53fc0) 0x1a, CmdSN 0x15bde from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-04-11T01:01:01.590Z cpu10:36073)ScsiDeviceIO: 2636: Cmd(0x43b5cf322d40) 0x1a, CmdSN 0x15c33 from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-04-11T01:05:56.656Z cpu4:35939)VSCSI: 2590: handle 8236(vscsi0:0):Reset request on FSS handle 2624663 (0 outstanding commands) from (vmm0:tosca_20151005)

2018-04-11T01:05:56.656Z cpu9:32945)VSCSI: 2868: handle 8236(vscsi0:0):Reset [Retries: 0/0] from (vmm0:tosca_20151005)

2018-04-11T01:05:56.656Z cpu2:32945)VSCSI: 2661: handle 8236(vscsi0:0):Completing reset (0 outstanding commands)

2018-04-11T01:06:07.488Z cpu8:35937)VSCSI: 2590: handle 8236(vscsi0:0):Reset request on FSS handle 2624663 (0 outstanding commands) from (vmm0:tosca_20151005)

2018-04-11T01:06:07.488Z cpu2:32945)VSCSI: 2868: handle 8236(vscsi0:0):Reset [Retries: 0/0] from (vmm0:tosca_20151005)

2018-04-11T01:06:07.488Z cpu2:32945)VSCSI: 2661: handle 8236(vscsi0:0):Completing reset (0 outstanding commands)

2018-04-11T01:20:36.446Z cpu3:35871)VSCSI: 2590: handle 8224(vscsi0:0):Reset request on FSS handle 3148725 (0 outstanding commands) from (vmm0:CPV4_20151005)

2018-04-11T01:20:36.447Z cpu2:32945)VSCSI: 2868: handle 8224(vscsi0:0):Reset [Retries: 0/0] from (vmm0:CPV4_20151005)

2018-04-11T01:20:36.447Z cpu2:32945)VSCSI: 2661: handle 8224(vscsi0:0):Completing reset (0 outstanding commands)

2018-04-11T01:20:47.572Z cpu6:35867)VSCSI: 2590: handle 8224(vscsi0:0):Reset request on FSS handle 3148725 (0 outstanding commands) from (vmm0:CPV4_20151005)

2018-04-11T01:20:47.572Z cpu2:32945)VSCSI: 2868: handle 8224(vscsi0:0):Reset [Retries: 0/0] from (vmm0:CPV4_20151005)

2018-04-11T01:20:47.572Z cpu2:32945)VSCSI: 2661: handle 8224(vscsi0:0):Completing reset (0 outstanding commands)

2018-04-11T01:39:37.596Z cpu6:34569)ScsiDeviceIO: 2636: Cmd(0x43b5c0ba0800) 0x1a, CmdSN 0x15c7c from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-04-11T01:40:07.862Z cpu5:35867)WARNING: ScsiDeviceIO: 1243: Device naa.600605b00a093f001d12d2537f796f0f performance has deteriorated. I/O latency increased from average value of 3528 microseconds to 107444 microseconds.

2018-04-11T01:40:14.208Z cpu11:35716)ScsiDeviceIO: 1217: Device naa.600605b00a093f001d12d2537f796f0f performance has improved. I/O latency reduced from 107444 microseconds to 21401 microseconds.

2018-04-11T01:40:53.656Z cpu3:35554)ScsiDeviceIO: 1217: Device naa.600605b00a093f001d12d2537f796f0f performance has improved. I/O latency reduced from 21401 microseconds to 6909 microseconds.

2018-04-11T02:30:21.006Z cpu5:35735)ScsiDeviceIO: 2636: Cmd(0x43b5ce082c00) 0x1a, CmdSN 0x15cd1 from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-04-11T03:09:37.604Z cpu11:35468)ScsiDeviceIO: 2636: Cmd(0x43b5c0bd3940) 0x1a, CmdSN 0x15d1a from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-04-11T04:01:01.557Z cpu8:35941)ScsiDeviceIO: 2636: Cmd(0x43b5c0c01480) 0x1a, CmdSN 0x15d6f from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-04-11T04:40:26.413Z cpu7:32789)ScsiDeviceIO: 2636: Cmd(0x43b5cdfa66c0) 0x1a, CmdSN 0x15db3 from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-04-11T05:39:37.637Z cpu10:35512)ScsiDeviceIO: 2636: Cmd(0x43b5c0b35c40) 0x1a, CmdSN 0x15e0f from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-04-11T06:10:39.541Z cpu6:35973)ScsiDeviceIO: 2636: Cmd(0x43b5c0be3200) 0x1a, CmdSN 0x15e59 from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-04-11T07:09:37.679Z cpu8:36072)ScsiDeviceIO: 2636: Cmd(0x43b5c0ba1b80) 0x1a, CmdSN 0x15eb5 from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-04-11T07:50:42.305Z cpu4:34636)ScsiDeviceIO: 2636: Cmd(0x43b5c0b8d000) 0x1a, CmdSN 0x15ef9 from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-04-11T08:39:37.695Z cpu8:34569)ScsiDeviceIO: 2636: Cmd(0x43b5c0c6dd80) 0x4d, CmdSN 0xe41 from world 34569 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.

2018-04-11T09:20:44.247Z cpu8:35870)ScsiDeviceIO: 2636: Cmd(0x43b5c1bb58c0) 0x1a, CmdSN 0x15f97 from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-04-11T10:09:37.721Z cpu7:35869)ScsiDeviceIO: 2636: Cmd(0x43b5c0c1af00) 0x4d, CmdSN 0xe4d from world 34569 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.

2018-04-11T11:00:46.202Z cpu5:35869)ScsiDeviceIO: 2636: Cmd(0x43b5cf32b800) 0x1a, CmdSN 0x16037 from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-04-11T11:39:37.745Z cpu11:36073)ScsiDeviceIO: 2636: Cmd(0x43b5cdf82ac0) 0x1a, CmdSN 0x16080 from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-04-11T12:30:48.551Z cpu9:34217)ScsiDeviceIO: 2636: Cmd(0x43b5cdfa3240) 0x1a, CmdSN 0x160d5 from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-04-11T13:09:37.769Z cpu6:32790)ScsiDeviceIO: 2636: Cmd(0x43b5c1bb34c0) 0x1a, CmdSN 0x1611e from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-04-11T14:01:01.554Z cpu8:35890)ScsiDeviceIO: 2636: Cmd(0x43b5cdc2aec0) 0x1a, CmdSN 0x16173 from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-04-11T14:40:51.762Z cpu7:35890)ScsiDeviceIO: 2636: Cmd(0x43b5ceb34780) 0x1a, CmdSN 0x161b7 from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-04-11T15:39:37.833Z cpu9:35416)ScsiDeviceIO: 2636: Cmd(0x43b5ceb32080) 0x1a, CmdSN 0x16213 from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2018-04-11T16:10:53.686Z cpu5:35581)ScsiDeviceIO: 2636: Cmd(0x43b5cdc2ad40) 0x1a, CmdSN 0x16255 from world 0 to dev "naa.600605b00a093f001d12d2537f796f0f" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

 

We've run hardware tests (especially on the hard drive), but everything seems to be OK.
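As an aside, the repeating "Valid sense data" triplets in the log can be decoded from the standard SCSI sense-key and ASC/ASCQ tables. A minimal sketch, with the tables trimmed to just the codes that appear above:

```python
# Decode the sense triplets seen in the log: (sense key, ASC, ASCQ).
SENSE_KEYS = {0x5: "ILLEGAL REQUEST"}
ADDITIONAL = {
    (0x24, 0x0): "INVALID FIELD IN CDB",
    (0x20, 0x0): "INVALID COMMAND OPERATION CODE",
}

def decode(key: int, asc: int, ascq: int) -> str:
    return f"{SENSE_KEYS[key]}: {ADDITIONAL[(asc, ascq)]}"

# 0x1a is MODE SENSE(6); the device is rejecting a field in that command.
print(decode(0x5, 0x24, 0x0))  # -> ILLEGAL REQUEST: INVALID FIELD IN CDB
# 0x4d is LOG SENSE; here the device rejects the opcode outright.
print(decode(0x5, 0x20, 0x0))  # -> ILLEGAL REQUEST: INVALID COMMAND OPERATION CODE
```

Rejections like these to optional MODE SENSE / LOG SENSE commands are often just a RAID device declining commands it doesn't implement; whether they are connected to the bootbank disappearing is not clear from the log alone.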

 

Thanks.

Passthru not working after 6.7 update


I was able to pass through my AMD GPU and a Renesas USB controller in 6.5, but after updating to 6.7, my Windows 10 VM startup just shows a black screen on the console. Normally, the VMware logo displays and Windows boots up. Rolling back to 6.5 makes passthrough work again. Any ideas?
