Channel: VMware Communities : All Content - ESXi

Guest Customization for Windows 10 1803 starts but doesn't finish


I am building a new Windows 10 1803 template and using the same guest customization specification that I have used with previous builds of Windows 10. The VM is cloned from the template and applies sysprep successfully, but the machine never reboots, renames itself, or gets added to the domain.

 

I have rebuilt the Guest Customization settings from scratch and still get the same result.

 

Any ideas?


LSI IR write caching performance fix


Hi,

 

WARNING: Enabling write caching (WC) without a battery backup unit (BBU) can cause data corruption if the system is shut off incorrectly.

 

I have found quite a few postings regarding performance issues with ESXi and LSI IR controllers, probably caused by write caching being disabled by default on these controllers.

 

You can change this with LSIUtil (http://www.lsi.com/DistributionSystem/AssetDocument/support/downloads/hbas/fibre_channel/hardware_drivers/LSIUtil%20Kit_1.60.zip), but by default this does not work on ESXi because the environment is slightly different:

 

/tmp # ./lsiutil

 

LSI Logic MPT Configuration Utility, Version 1.60, July 11, 2008

sh: /sbin/modprobe: not found

sh: /bin/mknod: not found

Couldn't open /dev/mptctl or /dev/mpt2ctl!

 

0 MPT Ports found

/tmp #

 

I made the following change to the LSIUtil source, and it then works on my ESXi 4.0 host:

 

--- lsiutil/mpt2sas_ctl.h       2008-04-17 20:30:40.000000000 +0200
+++ lsiutil.fix/mpt2sas_ctl.h   2009-07-05 22:58:58.000000000 +0200
@@ -56,8 +56,8 @@
 /**
  *
  * HACK - changeme (MPT_MINOR = 220 )
  */
-#define MPT2SAS_MINOR          MPT_MINOR+1
-#define MPT2SAS_DEV_NAME       "mpt2ctl"
+#define MPT2SAS_MINOR          MPT_MINOR2
+#define MPT2SAS_DEV_NAME       "mptctl_sas"
 #define MPT2_MAGIC_NUMBER      'm'
 #define MPT2_IOCTL_DEFAULT_TIMEOUT (10) /* in seconds */


/tmp # ./lsiutil.new

 

LSI Logic MPT Configuration Utility, Version 1.60, July 11, 2008

sh: /sbin/modprobe: not found

sh: /bin/mknod: not found

 

1 MPT Port found

 

     Port Name         Chip Vendor/Type/Rev    MPT Rev  Firmware Rev  IOC

1.  /proc/mpt/ioc0    LSI Logic SAS1068E B3     105      011c0200     0

 

Select a device:  1

 

1.  Identify firmware, BIOS, and/or FCode

2.  Download firmware (update the FLASH)

4.  Download/erase BIOS and/or FCode (update the FLASH)

8.  Scan for devices

10.  Change IOC settings (interrupt coalescing)

13.  Change SAS IO Unit settings

16.  Display attached devices

20.  Diagnostics

21.  RAID actions

22.  Reset bus

23.  Reset target

42.  Display operating system names for devices

45.  Concatenate SAS firmware and NVDATA files

59.  Dump PCI config space

60.  Show non-default settings

61.  Restore default settings

66.  Show SAS discovery errors

69.  Show board manufacturing information

97.  Reset SAS link, HARD RESET

98.  Reset SAS link

99.  Reset port

e   Enable expert mode in menus

p   Enable paged mode

w   Enable logging

 

Main menu, select an option:  21

 

1.  Show volumes

2.  Show physical disks

3.  Get volume state

4.  Wait for volume resync to complete

23.  Replace physical disk

26.  Disable drive firmware update mode

27.  Enable drive firmware update mode

30.  Create volume

31.  Delete volume

32.  Change volume settings

33.  Change volume name

50.  Create hot spare

51.  Delete hot spare

99.  Reset port

e   Enable expert mode in menus

p   Enable paged mode

w   Enable logging

 

RAID actions menu, select an option:  32

 

Volume 0 is Bus 0 Target 0, Type IM (Integrated Mirroring)

 

Volume 0 Settings:  write caching disabled, auto configure

Volume 0 draws from Hot Spare Pools:  0

 

Enable write caching:  yes

Offline on SMART data: 

Auto configuration: 

Priority resync: 

Hot Spare Pools (bitmask of pool numbers): 

 

RAID ACTION returned IOCLogInfo = 00010005

 

RAID actions menu, select an option:  0

 

Main menu, select an option:  0

-


 

I was doing a slow SCP transfer while enabling WC, and you can see the improvement in the attached screenshot.

 

Let me know if it works for you.

Need help: How to retrieve/change password in ESXi 5.5 + vSphere


Hello Helpful People in this community,

I'm running ESXi 5.5 and using the VMware vSphere Client (see screenshot below), free version (in German).

I can't log in as root anymore, although I'm quite sure I'm using the correct password (but then again ...).

A 'connection error' window pops up, stating that the login cannot be completed due to a wrong user name or password.


I read that reinstalling is necessary in order to reset the password, but WHAT do I reinstall? The vSphere Client or ESXi?

I guess it must be ESXi (presumably the password is stored there somewhere), but if I reinstall, surely my virtual machines will get destroyed, correct?
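
If it does come down to a reinstall: a minimal sketch, assuming the VMs live on a VMFS datastore that the installer is told to preserve (the ESXi installer can keep the existing VMFS datastore while overwriting the system partitions). Afterwards each VM only needs to be re-registered; the datastore and VM names below are placeholders.

# Re-register an existing VM with the freshly reinstalled host (ESXi shell)
vim-cmd solo/registervm /vmfs/volumes/datastore1/MyVM/MyVM.vmx

# Confirm the VM now shows up in the inventory
vim-cmd vmsvc/getallvms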


Is there anyone among you who knows an elegant solution? Please ... I really mustn't lose my VMs ...


THANK YOU


Philipp

 

VSph.jpg

Is there still a free version of ESXi that is not just a trial for 60 days?


And what version is that?

And how do I find it?


Cheers

ESXi 6.7 vSwitch VLAN tagging issue


Hello everyone,

 

I'm just tinkering with ESXi in my homelab and noticed the following issue:

  • the vNIC in the "Voice" portgroup can't pass traffic to the switch
  • in the ESXi web console it looks like the vSwitch is registering the wrong MAC address for the second NIC attached to VLAN 10.

 

The switches (Cisco 2960G) are configured for VLAN trunking, and in general VLANs work well in my lab setting. The only issue is the ESXi host: the vNIC attached to the portgroup "Voice" with VLAN 10 does not seem to be able to pass traffic through the physical NIC to the switch (as mentioned above).

 

2019-03-25 ESXi vSwitch 01.jpg

 

2019-03-25 ESXi vSwitch.jpg

 

On the left side is the output of the Linux "ip addr sh" command. On the right side is the vSwitch overview. Please note that the MAC addresses are different in the Linux VM but the same in the vSwitch overview. Is this a possible bug or just a feature...?
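
To cross-check what the vSwitch has actually registered for each vNIC against what the guest reports, a minimal diagnostic sketch from the ESXi shell; the World ID below is a placeholder taken from the output of the first command.

# List the networking worlds of the running VMs and note the World ID
esxcli network vm list

# Show the switch ports, portgroups and MAC addresses registered for that VM
# (replace 1234567 with the World ID from the previous command)
esxcli network vm port list -w 1234567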

 

I'm grateful for any hints to point me in the right direction...

 

 

Cheers,

 

Joerg

ESXi 6.7u1 patches - where to download


Hi,

 

I want to download the latest ESXi 6.7 U1 patches, ESXi670-201811001 and ESXi670-201901001, manually. However, for release 6.7 the list in the patch portal is empty. Is this only the case for me? Is anyone seeing the patches there? When I select ESXi 6.5, patches do show up.

 

Where can I download the zip files manually?
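
As an alternative to the portal zip files, a minimal sketch of patching straight from VMware's public online depot with esxcli; the image profile name at the end is an assumption, so list the available profiles first, and put the host into maintenance mode before updating.

# Allow the host to reach the online depot over HTTPS
esxcli network firewall ruleset set -e true -r httpClient

# List the ESXi 6.7.0 image profiles available in the public depot
esxcli software sources profile list \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep ESXi-6.7.0

# Apply the chosen profile (the name below is only an example; use one from the list above)
esxcli software profile update \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml \
  -p ESXi-6.7.0-20190104001-standard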

 

Regards

Installing drivers on an installation medium


I'm trying to install some HP drivers onto the USB stick that has ESXi on it. I'm trying to run ESXi on an HP ProLiant DL360 G5 from USB, but the server won't see the HDD that I put in. Someone told me to install these HP drivers so that the server and the OS can see the HDD, but I don't know how to do this. I've looked around but it hasn't helped me much. Can anyone help?
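
For reference, drivers are normally added to a running ESXi installation as VIBs or offline bundles rather than copied onto the USB stick directly. A minimal sketch, assuming the HP driver comes as an offline bundle (.zip) and has been copied to a location the host can read; the path below is a placeholder.

# Install the driver offline bundle from the ESXi shell (an absolute path is required)
esxcli software vib install -d /tmp/hpe-storage-driver-bundle.zip

# Reboot the host so the new driver module is loaded
reboot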

This virtual machine failed to become vSphere HA Protected and HA may not attempt to restart it after a failure.


Hello

 

We are getting this error for only one guest in a 4-node HA cluster: "This virtual machine failed to become vSphere HA Protected and HA may not attempt to restart it after a failure." The 4 hosts have enough resources left.

 

Why?

 

Error.JPG


vCPU vs vSocket ?


Hi all,

 

I am running a few CPU benchmarks on a host equipped with a single 6-core processor.

 

- vCPU configured as 1 core on 6 sockets: 5,400 pts.

- vCPU configured as 6 cores on 1 socket: 11,800 pts.

 

Both configurations were set to allow this VM an unlimited amount of CPU resources with high priority.

 

But what monitoring showed me is that the multi-socket layout drives only about 60% CPU usage on the host, while the multi-core layout uses 100% of the host's CPU.

 

That explains the large difference in results, but why does it behave that way?

 

Thank you.

PSOD: PF Exception 14


Good day everyone. My server crashed and I got a PSOD with the following dump log:

2019-03-26T11:34:02.589Z cpu22:33279)@BlueScreen: #PF Exception 14 in world 33279:vmsyslogd IP 0x4180280bcfc5 addr 0x1189
PTEs:0xe9b509027;0xe9b50a027;0x0;
2019-03-26T11:34:02.590Z cpu22:33279)Code start: 0x418028000000 VMK uptime: 106:19:42:36.287
2019-03-26T11:34:02.590Z cpu22:33279)0x4390cff9b938:[0x4180280bcfc5]World_GetKillLevel@vmkernel#nover+0x1 stack: 0x41802821330f
2019-03-26T11:34:02.590Z cpu22:33279)0x4390cff9b948:[0x418028213cc1]CpuSchedWait@vmkernel#nover+0x15d stack: 0x418045901e00
2019-03-26T11:34:02.594Z cpu22:33279)base fs=0x0 gs=0x418045800000 Kgs=0x0
2019-03-26T11:34:02.594Z cpu22:33279)vmkernel             0x0 .data 0x0 .bss 0x0
2019-03-26T11:34:02.594Z cpu22:33279)chardevs             0x4180285b7000 .data 0x417fc0000000 .bss 0x417fc00003c0
2019-03-26T11:34:02.594Z cpu22:33279)user                 0x4180285be000 .data 0x417fc0400000 .bss 0x417fc040f3c0
2019-03-26T11:34:02.594Z cpu22:33279)vsanapi              0x41802868b000 .data 0x417fc0800000 .bss 0x417fc0802480
2019-03-26T11:34:02.594Z cpu22:33279)vsanbase             0x418028693000 .data 0x417fc0c00000 .bss 0x417fc0c08540
2019-03-26T11:34:02.594Z cpu22:33279)vprobe               0x4180286a0000 .data 0x417fc1000000 .bss 0x417fc100e540
2019-03-26T11:34:02.594Z cpu22:33279)vmkapi_mgmt          0x4180286e9000 .data 0x417fc1400000 .bss 0x417fc1400180
2019-03-26T11:34:02.594Z cpu22:33279)procfs               0x4180286ee000 .data 0x417fc1800000 .bss 0x417fc1800240
2019-03-26T11:34:02.594Z cpu22:33279)vmkapi_v2_1_0_0_vmkernel_shim 0x4180286f1000 .data 0x417fc1c00000 .bss 0x417fc1c08a80
2019-03-26T11:34:02.594Z cpu22:33279)vmkapi_v2_0_0_0_vmkernel_shim 0x4180286f7000 .data 0x417fc2000000 .bss 0x417fc2008100
2019-03-26T11:34:02.594Z cpu22:33279)iodm                 0x4180286fd000 .data 0x417fc2400000 .bss 0x417fc2400138
2019-03-26T11:34:02.594Z cpu22:33279)vmkapi_v2_2_0_0_vmkernel_shim 0x418028701000 .data 0x417fc2800000 .bss 0x417fc280c800
2019-03-26T11:34:02.594Z cpu22:33279)vmkapi_v2_2_0_0_mgmt_shim 0x418028707000 .data 0x417fc2c00000 .bss 0x417fc2c001a0
2019-03-26T11:34:02.594Z cpu22:33279)dma_mapper_iommu     0x418028708000 .data 0x417fc3000000 .bss 0x417fc3000080
2019-03-26T11:34:02.594Z cpu22:33279)vmkplexer            0x41802870b000 .data 0x417fc3400000 .bss 0x417fc3400260
2019-03-26T11:34:02.594Z cpu22:33279)vmklinux_9           0x41802870f000 .data 0x417fc3800000 .bss 0x417fc3808ec0
2019-03-26T11:34:02.594Z cpu22:33279)vmklinux_9_2_0_0     0x4180287a4000 .data 0x417fc3c00000 .bss 0x417fc3c07e84
2019-03-26T11:34:02.594Z cpu22:33279)vmklinux_9_2_1_0     0x4180287a7000 .data 0x417fc4000000 .bss 0x417fc4007f98
2019-03-26T11:34:02.594Z cpu22:33279)vmklinux_9_2_2_0     0x4180287aa000 .data 0x417fc4400000 .bss 0x417fc4408798
2019-03-26T11:34:02.594Z cpu22:33279)vmklinux_9_2_3_0     0x4180287ad000 .data 0x417fc4800000 .bss 0x417fc4808ad8
2019-03-26T11:34:02.594Z cpu22:33279)lsi_mr3              0x4180287b0000 .data 0x417fc4c00000 .bss 0x417fc4c002c0
2019-03-26T11:34:02.594Z cpu22:33279)elxnet               0x4180287cf000 .data 0x417fc5000000 .bss 0x417fc50007a0
2019-03-26T11:34:02.594Z cpu22:33279)iscsi_trans          0x418028808000 .data 0x417fc5400000 .bss 0x417fc5401800
2019-03-26T11:34:02.594Z cpu22:33279)iscsi_trans_compat_shim 0x418028814000 .data 0x417fc5800000 .bss 0x417fc580096c
2019-03-26T11:34:02.594Z cpu22:33279)iscsi_trans_incompat_shim 0x418028815000 .data 0x417fc5c00000 .bss 0x417fc5c007e4
2019-03-26T11:34:02.594Z cpu22:33279)etherswitch          0x418028816000 .data 0x417fc6000000 .bss 0x417fc6014d40
2019-03-26T11:34:02.594Z cpu22:33279)netsched             0x41802885a000 .data 0x417fc6400000 .bss 0x417fc6403d40
2019-03-26T11:34:02.594Z cpu22:33279)netioc               0x418028869000 .data 0x417fc6800000 .bss 0x417fc68000a0
2019-03-26T11:34:02.594Z cpu22:33279)random               0x41802886f000 .data 0x417fc6c00000 .bss 0x417fc6c00600
2019-03-26T11:34:02.594Z cpu22:33279)cnic_register        0x418028873000 .data 0x417fc7000000 .bss 0x417fc70001e0
2019-03-26T11:34:02.594Z cpu22:33279)igb                  0x418028875000 .data 0x417fc7400000 .bss 0x417fc7401f00
2019-03-26T11:34:02.594Z cpu22:33279)usb                  0x4180288a1000 .data 0x417fc7800000 .bss 0x417fc7801680
2019-03-26T11:34:02.594Z cpu22:33279)ehci-hcd             0x4180288c7000 .data 0x417fc7c00000 .bss 0x417fc7c002a0
2019-03-26T11:34:02.594Z cpu22:33279)xhci                 0x4180288d3000 .data 0x417fc8000000 .bss 0x417fc80003a0
2019-03-26T11:34:02.594Z cpu22:33279)hid                  0x4180288f2000 .data 0x417fc8400000 .bss 0x417fc84004e0
2019-03-26T11:34:02.594Z cpu22:33279)dm                   0x4180288f8000 .data 0x417fc8800000 .bss 0x417fc8800000
2019-03-26T11:34:02.594Z cpu22:33279)nmp                  0x4180288fb000 .data 0x417fc8c00000 .bss 0x417fc8c04010
2019-03-26T11:34:02.594Z cpu22:33279)vmw_satp_local       0x418028926000 .data 0x417fc9000000 .bss 0x417fc9000028
2019-03-26T11:34:02.594Z cpu22:33279)vmw_satp_default_aa  0x418028928000 .data 0x417fc9400000 .bss 0x417fc9400000
2019-03-26T11:34:02.594Z cpu22:33279)vmw_psp_lib          0x41802892a000 .data 0x417fc9800000 .bss 0x417fc9800290
2019-03-26T11:34:02.594Z cpu22:33279)vmw_psp_fixed        0x41802892c000 .data 0x417fc9c00000 .bss 0x417fc9c00000
2019-03-26T11:34:02.594Z cpu22:33279)vmw_psp_rr           0x41802892f000 .data 0x417fca000000 .bss 0x417fca000068
2019-03-26T11:34:02.594Z cpu22:33279)vmw_psp_mru          0x418028932000 .data 0x417fca400000 .bss 0x417fca400000
2019-03-26T11:34:02.594Z cpu22:33279)libata_92            0x418028934000 .data 0x417fca800000 .bss 0x417fca802660
2019-03-26T11:34:02.594Z cpu22:33279)libata_9_2_0_0       0x418028959000 .data 0x417fcac00000 .bss 0x417fcac01750
2019-03-26T11:34:02.594Z cpu22:33279)libata_9_2_1_0       0x41802895a000 .data 0x417fcb000000 .bss 0x417fcb001750
2019-03-26T11:34:02.594Z cpu22:33279)libata_9_2_2_0       0x41802895b000 .data 0x417fcb400000 .bss 0x417fcb401750
2019-03-26T11:34:02.594Z cpu22:33279)usb-storage          0x41802895c000 .data 0x417fcb800000 .bss 0x417fcb8049a0
2019-03-26T11:34:02.594Z cpu22:33279)healthchk            0x418028969000 .data 0x417fcbc00000 .bss 0x417fcbc12a00
2019-03-26T11:34:02.594Z cpu22:33279)teamcheck            0x41802897f000 .data 0x417fcc000000 .bss 0x417fcc012f40
2019-03-26T11:34:02.594Z cpu22:33279)vlanmtucheck         0x418028992000 .data 0x417fcc400000 .bss 0x417fcc412c40
2019-03-26T11:34:02.594Z cpu22:33279)heartbeat            0x4180289a7000 .data 0x417fcc800000 .bss 0x417fcc812e40
2019-03-26T11:34:02.594Z cpu22:33279)shaper               0x4180289bc000 .data 0x417fccc00000 .bss 0x417fccc14b80
2019-03-26T11:34:02.594Z cpu22:33279)lldp                 0x4180289d1000 .data 0x417fcd000000 .bss 0x417fcd000040
2019-03-26T11:34:02.594Z cpu22:33279)cdp                  0x4180289d6000 .data 0x417fcd400000 .bss 0x417fcd414080
2019-03-26T11:34:02.594Z cpu22:33279)ipfix                0x4180289ef000 .data 0x417fcd800000 .bss 0x417fcd813280
2019-03-26T11:34:02.594Z cpu22:33279)tcpip4               0x418028a05000 .data 0x417fcdc00000 .bss 0x417fcdc18380
2019-03-26T11:34:02.594Z cpu22:33279)dvsdev               0x418028b60000 .data 0x417fce000000 .bss 0x417fce000040
2019-03-26T11:34:02.594Z cpu22:33279)vmci                 0x418028b63000 .data 0x417fce400000 .bss 0x417fce4059c0
2019-03-26T11:34:02.594Z cpu22:33279)dvfilter             0x418028b88000 .data 0x417fce800000 .bss 0x417fce800b00
2019-03-26T11:34:02.594Z cpu22:33279)lacp                 0x418028ba9000 .data 0x417fcec00000 .bss 0x417fcec00180
2019-03-26T11:34:02.594Z cpu22:33279)vmkapi_v2_0_0_0_dvfilter_shim 0x418028bb6000 .data 0x417fcf000000 .bss 0x417fcf000930
2019-03-26T11:34:02.594Z cpu22:33279)vmkapi_v2_2_0_0_dvfilter_shim 0x418028bb7000 .data 0x417fcf400000 .bss 0x417fcf4009f0
2019-03-26T11:34:02.594Z cpu22:33279)vmkapi_v2_1_0_0_dvfilter_shim 0x418028bb8000 .data 0x417fcf800000 .bss 0x417fcf8009f0
2019-03-26T11:34:02.594Z cpu22:33279)libfc_92             0x418028bb9000 .data 0x417fcfc00000 .bss 0x417fcfc00540
2019-03-26T11:34:02.594Z cpu22:33279)libfcoe_92           0x418028bd4000 .data 0x417fd0000000 .bss 0x417fd00001e0
2019-03-26T11:34:02.594Z cpu22:33279)libfc_9_2_0_0        0x418028bdd000 .data 0x417fd0400000 .bss 0x417fd0400868
2019-03-26T11:34:02.594Z cpu22:33279)libfcoe_9_2_0_0      0x418028bde000 .data 0x417fd0800000 .bss 0x417fd08001f4
2019-03-26T11:34:02.594Z cpu22:33279)libfc_9_2_1_0        0x418028bdf000 .data 0x417fd0c00000 .bss 0x417fd0c00868
2019-03-26T11:34:02.594Z cpu22:33279)libfcoe_9_2_1_0      0x418028be0000 .data 0x417fd1000000 .bss 0x417fd10001f4
2019-03-26T11:34:02.594Z cpu22:33279)ahci                 0x418028be1000 .data 0x417fd1400000 .bss 0x417fd1400420
2019-03-26T11:34:02.594Z cpu22:33279)esxfw                0x418028be9000 .data 0x417fd1800000 .bss 0x417fd1813940
2019-03-26T11:34:02.594Z cpu22:33279)dvfilter-generic-fastpath 0x418028c00000 .data 0x417fd1c00000 .bss 0x417fd1c13080
2019-03-26T11:34:02.594Z cpu22:33279)vmkibft              0x418028c1c000 .data 0x417fd2000000 .bss 0x417fd2003960
2019-03-26T11:34:02.594Z cpu22:33279)lvmdriver            0x418028c20000 .data 0x417fd2400000 .bss 0x417fd2403500
2019-03-26T11:34:02.594Z cpu22:33279)deltadisk            0x418028c3a000 .data 0x417fd2800000 .bss 0x417fd2807e40
2019-03-26T11:34:02.594Z cpu22:33279)vdfm                 0x418028c71000 .data 0x417fd2c00000 .bss 0x417fd2c001c0
2019-03-26T11:34:02.594Z cpu22:33279)tracing              0x418028c76000 .data 0x417fd3000000 .bss 0x417fd3006380
2019-03-26T11:34:02.594Z cpu22:33279)rdt                  0x418028c7f000 .data 0x417fd3400000 .bss 0x417fd3405840
2019-03-26T11:34:02.594Z cpu22:33279)vsanutil             0x418028cba000 .data 0x417fd3800000 .bss 0x417fd380a5c0
2019-03-26T11:34:02.594Z cpu22:33279)lsomcommon           0x418028ce3000 .data 0x417fd3c00000 .bss 0x417fd3c01960
2019-03-26T11:34:02.594Z cpu22:33279)plog                 0x418028d23000 .data 0x417fd4000000 .bss 0x417fd4008470
2019-03-26T11:34:02.594Z cpu22:33279)gss                  0x418028dd0000 .data 0x417fd4400000 .bss 0x417fd4402ad8
2019-03-26T11:34:02.594Z cpu22:33279)vmfs3                0x418028df6000 .data 0x417fd4800000 .bss 0x417fd4803b00
2019-03-26T11:34:02.594Z cpu22:33279)sunrpc               0x418028e73000 .data 0x417fd4c00000 .bss 0x417fd4c03880
2019-03-26T11:34:02.594Z cpu22:33279)virsto               0x418028e8d000 .data 0x417fd5000000 .bss 0x417fd5000920
2019-03-26T11:34:02.594Z cpu22:33279)lsom                 0x418028eff000 .data 0x417fd5400000 .bss 0x417fd540a980
2019-03-26T11:34:02.594Z cpu22:33279)vfat                 0x418028fdb000 .data 0x417fd5800000 .bss 0x417fd5802800
2019-03-26T11:34:02.594Z cpu22:33279)ufs                  0x418028fe6000 .data 0x417fd5c00000 .bss 0x417fd5c008c0
2019-03-26T11:34:02.594Z cpu22:33279)dvfg-igmp            0x418028ff9000 .data 0x417fd6000000 .bss 0x417fd6000208
2019-03-26T11:34:02.594Z cpu22:33279)cmmds_net            0x418028fff000 .data 0x417fd6400000 .bss 0x417fd6403740
2019-03-26T11:34:02.594Z cpu22:33279)cmmds                0x418029012000 .data 0x417fd6800000 .bss 0x417fd6805e60
2019-03-26T11:34:02.594Z cpu22:33279)cmmds_resolver       0x41802907f000 .data 0x417fd6c00000 .bss 0x417fd6c00110
2019-03-26T11:34:02.594Z cpu22:33279)vsan                 0x41802908f000 .data 0x417fd7000000 .bss 0x417fd7017500
2019-03-26T11:34:02.594Z cpu22:33279)vmklink_mpi          0x4180291f0000 .data 0x417fd7400000 .bss 0x417fd74025c0
2019-03-26T11:34:02.594Z cpu22:33279)swapobj              0x4180291f6000 .data 0x417fd7800000 .bss 0x417fd7803268
2019-03-26T11:34:02.594Z cpu22:33279)nfsclient            0x4180291ff000 .data 0x417fd7c00000 .bss 0x417fd7c03ca0
2019-03-26T11:34:02.594Z cpu22:33279)nfs41client          0x41802921a000 .data 0x417fd8000000 .bss 0x417fd8005600
2019-03-26T11:34:02.594Z cpu22:33279)vflash               0x41802927e000 .data 0x417fd8400000 .bss 0x417fd8403700
2019-03-26T11:34:02.594Z cpu22:33279)vmkapei              0x418029289000 .data 0x417fd8800000 .bss 0x417fd8800b20
2019-03-26T11:34:02.594Z cpu22:33279)procMisc             0x418029292000 .data 0x417fd8c00000 .bss 0x417fd8c00000
2019-03-26T11:34:02.594Z cpu22:33279)ipmi_msghandler      0x418029293000 .data 0x417fd9000000 .bss 0x417fd90005e0
2019-03-26T11:34:02.594Z cpu22:33279)ipmi_si_drv          0x41802929c000 .data 0x417fd9400000 .bss 0x417fd9400660
2019-03-26T11:34:02.594Z cpu22:33279)ipmi_devintf         0x4180292a7000 .data 0x417fd9800000 .bss 0x417fd9800180
2019-03-26T11:34:02.594Z cpu22:33279)vmkapi_v2_2_0_0_nmp_shim 0x4180292aa000 .data 0x417fd9c00000 .bss 0x417fd9c00d68
2019-03-26T11:34:02.594Z cpu22:33279)vmkapi_v2_0_0_0_nmp_shim 0x4180292ab000 .data 0x417fda000000 .bss 0x417fda000ce8
2019-03-26T11:34:02.594Z cpu22:33279)vmkapi_v2_1_0_0_nmp_shim 0x4180292ac000 .data 0x417fda400000 .bss 0x417fda400ce8
2019-03-26T11:34:02.594Z cpu22:33279)vmkapi_v2_0_0_0_iscsi_shim 0x4180292ad000 .data 0x417fda800000 .bss 0x417fda800970
2019-03-26T11:34:02.594Z cpu22:33279)vmkapi_v2_1_0_0_iscsi_shim 0x4180292ae000 .data 0x417fdac00000 .bss 0x417fdac00970
2019-03-26T11:34:02.594Z cpu22:33279)vmkapi_v2_2_0_0_iscsi_shim 0x4180292af000 .data 0x417fdb000000 .bss 0x417fdb000970
2019-03-26T11:34:02.594Z cpu22:33279)ftcpt                0x4180292b0000 .data 0x417fdb400000 .bss 0x417fdb402fc0
2019-03-26T11:34:02.594Z cpu22:33279)hbr_filter           0x4180292ed000 .data 0x417fdb800000 .bss 0x417fdb8002c0
2019-03-26T11:34:02.594Z cpu22:33279)vmkstatelogger       0x418029319000 .data 0x417fdbc00000 .bss 0x417fdbc03840
2019-03-26T11:34:02.594Z cpu22:33279)svmmirror            0x418029340000 .data 0x417fdc000000 .bss 0x417fdc000100
2019-03-26T11:34:02.594Z cpu22:33279)cbt                  0x41802934d000 .data 0x417fdc400000 .bss 0x417fdc400080
2019-03-26T11:34:02.594Z cpu22:33279)migrate              0x418029351000 .data 0x417fdc800000 .bss 0x417fdc805100
2019-03-26T11:34:02.594Z cpu22:33279)filtmod              0x4180293bc000 .data 0x417fdcc00000 .bss 0x417fdcc03140
2019-03-26T11:34:02.594Z cpu22:33279)vfc                  0x4180293ca000 .data 0x417fdd000000 .bss 0x417fdd002c80
Coredump to disk. 
2019-03-26T11:34:02.644Z cpu22:33279)Slot 1 of 1.
2019-03-26T11:34:02.644Z cpu22:33279)Dump: 2347: Using dump slot size 2684354560.

 

I've googled a lot but haven't found anything that describes my specific case. Please help me understand where the problem is. Thanks in advance.
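
Since the last lines show the host wrote a core dump to disk, it may help to locate that dump before opening a support request. A minimal sketch using standard esxcli calls:

# Show whether a coredump partition is configured and which one is active
esxcli system coredump partition get

# List file-based core dumps, in case the host dumps to a file instead of a partition
esxcli system coredump file list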

Upgrade to VMFS 6


I am going to upgrade the datastores to VMFS 6. What I wonder about is this: when I delete a file from within the virtual machine, will the space be given back to the storage array? For example, if I delete 500 GB of files from within a 1 TB Windows Server guest, is that 500 GB returned to the storage unit? Is storage reclamation now automatic, or does it only happen when the VMDK itself is deleted? Is the storage brand/model important for automatic reclamation?

 

I have vCenter and ESXi 6.5 U2, and also 2 EMC VNX arrays behind EMC VPLEX.
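
A minimal sketch of how to inspect and trigger space reclamation from the ESXi shell; the datastore name is a placeholder. Note that reclaiming space freed inside the guest generally also requires a thin-provisioned VMDK and a guest OS that issues TRIM/UNMAP.

# Show the automatic unmap (space reclamation) settings of a VMFS 6 datastore
esxcli storage vmfs reclaim config get -l Datastore01

# Manually reclaim free space on a datastore
esxcli storage vmfs unmap -l Datastore01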

 

thanks

Windows 2012 VM drops off network


This is going to be a long one. We have a ticket open with VMware, the hardware vendors, and Microsoft on this one. We are going in circles.

 

We have 2 VMs running on the same host. Over the past 2 weeks, these VMs have dropped off the network about 5 times. To fix it, you have to disable and re-enable the NIC.

 

  • While the VM is down, you can't ping anything: not the gateway, not another VM on the same host on the same virtual switch, not another VM in the environment.
  • vMotion to another host doesn't fix it.
  • The VM is not under heavy load when this happens.
  • ipconfig /all looks normal.
  • Disconnecting and re-connecting the NIC at the VM hardware level doesn't fix it.
  • There are entries in the Windows event logs that are symptoms of the VM being disconnected from the network (can't resolve host names, authentication errors, etc.), but there is no event showing that the NIC disconnected.

 

 

 

This is on a Vblock. VCE has looked at everything hardware-wise and sees nothing. There is no indication that there was even an outage.

 

VMware has looked at everything and sees nothing on the ESXi side. VMware Tools is up to date.

 

Microsoft is looking at it and still can't find anything.

 

 

Any thoughts? Any ideas on what else I can test for?
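
One more data point worth collecting the next time it happens: a minimal packet-capture sketch from the ESXi shell, to see whether frames from the guest ever reach the vSwitch port and the physical uplink. The World ID, switch port ID and vmnic name below are placeholders.

# Find the VM's World ID and its vSwitch port ID
esxcli network vm list
esxcli network vm port list -w 1234567

# Capture at the VM's switch port and at the physical uplink while reproducing the issue
pktcap-uw --switchport 50331662 -o /tmp/vmport.pcap
pktcap-uw --uplink vmnic0 -o /tmp/uplink.pcap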

Configure Link Aggregation without vDS


Hi all,

 

Is it possible to configure link aggregation on a single ESXi host's networking without a vDS?
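
For context: LACP itself is only available on a vSphere Distributed Switch, but a static EtherChannel ("mode on" port-channel on the physical switch) combined with the IP hash teaming policy does work on a standard vSwitch. A minimal sketch, where vSwitch0 and the portgroup name are placeholders:

# Set IP hash load balancing on the standard vSwitch and on a portgroup
# (requires a matching static port-channel on the physical switch side)
esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash
esxcli network vswitch standard portgroup policy failover set -p "VM Network" -l iphash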

Newly created user cannot log in


I created a new user on my ESXi 6.5 host.

Roles and permissions are assigned, but I still cannot log in.

The error message is "Cannot complete due to incorrect user name or password."

 

I have already followed the recommended steps to create new users and assign roles through the web GUI.
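
To double-check from the ESXi shell that the account and its permission entry really exist on the host, a minimal sketch using standard esxcli calls:

# List the local accounts defined on this host
esxcli system account list

# List the permissions (user-to-role assignments) defined on this host
esxcli system permission list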

vSphere 6.5 U1 update on Windows fails


"Installation of component VCSServiceManager failed with error code '1603'. Check the logs for more details."

I tried turning on all of the .NET features.

 


Root user not able to log in


Does the root user account get locked out after a certain number of wrong password attempts?

I noticed that I cannot log in again after a few failed password attempts.

I need to reboot the ESXi host in order to log in again.

My ESXi version is 6.5.0.

 

error message is "Cannot complete lofin due to an incorrect user name or password.

How do I prevent the root user from getting locked out?
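
A minimal sketch of the relevant advanced settings, assuming ESXi 6.x account-lockout behaviour (the lockout applies to SSH and API/UI logins; the DCUI console typically remains accessible, which can save a reboot). Setting the failure count to 0 disables the lockout.

# Show the current local-account lockout settings
esxcli system settings advanced list -o /Security/AccountLockFailures
esxcli system settings advanced list -o /Security/AccountUnlockTime

# Disable account lockout entirely (0 = never lock)
esxcli system settings advanced set -o /Security/AccountLockFailures -i 0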

Is there any way to upgrade from ESXi 5.5 to 6.0 without rebooting or stopping the running VM(s)? Thanks


Is there any way to upgrade from ESXi 5.5 to 6.0 without rebooting or stopping the running VM(s)? Thanks

vCenter 5.5 | Edition of vCenter only supports 3 host(s)


Pictures:

 

Before, all three hosts ran without a problem, until the hard drive of the vCenter Server filled up and I freed up disk space again.

After the restart, this error occurs.

Anmerkung 2019-03-27 082634.png

The vCenter hosts look like this:

(All VMs on the ESXi hosts are online, but I can't manage them because of the license problem.)

 

Anmerkung 2019-03-27 083116.png

My licenses look like this (license key and domain are censored):

Anmerkung 2019-03-27 083258.png

 

I can't fix this error ... can anybody help me, please?

 

Regards

Windows vCenter 6.5 to VCSA 6.7 Migration (Compatibility)


Hi Folks,

 

I am planning to migrate a Windows vCenter 6.5 to VCSA 6.7 U1. Currently I am using Veeam Backup to take full backups (host level, VM level, object level).

After migrating to the new VCSA appliance, are there any changes we need to make to the backup configuration on the vCenter side or the Veeam side? Please advise.

 

Arjun EK

Accidentally disabled Ethernet adapter


Hello guys, can you help me? I accidentally disabled my physical Gigabyte Ethernet adapter under "Manage", and after a reboot my ESXi 6.7 host shows "No compatible network adapter found" and I lost the web client connection.

How can I enable the adapter via the CLI?

esxcfg-nic -l shows nothing

esxcfg-info shows physical adapter

In /etc/vmware/esx.conf I can see all my network settings.
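
A minimal recovery sketch from the DCUI/SSH shell; note the ESXi command is esxcfg-nics (with an "s"), which may explain why esxcfg-nic -l returned nothing. vmnic0 and vSwitch0 below are placeholders for your adapter and switch names.

# List physical NICs and their link state
esxcfg-nics -l
esxcli network nic list

# Bring the NIC up and re-attach it as an uplink of the standard vSwitch
esxcli network nic up -n vmnic0
esxcli network vswitch standard uplink add -u vmnic0 -v vSwitch0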
