ESXi 6.5: lsusb shows the USB device, but the host only shows the Avocent mass storage function as an available USB device
Problem adding physical SATA disk as RDM in ESXi 6.7
I followed the instructions in this guide to add the disk to my Windows Server 2016 guest VM.
How to passthrough SATA drives directly on VMWare ESXI 6.5 as RDMs · GitHub
So, the drive appears in Windows and its size is recognized. However, the problem is that it shows as a RAW partition in Windows even though the drive has data on it. I don't recall whether it is formatted as NTFS or exFAT, but the partition scheme is GPT. The drive is 4TB and has a single partition.
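For reference, the mapping in that guide is created with vmkfstools in physical compatibility mode; a minimal sketch (the device identifier and datastore path below are placeholders):
# ls /vmfs/devices/disks/
# vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_4TB_DISK /vmfs/volumes/datastore1/rdm/sata-4tb-rdm.vmdk
The first command lists the disk identifiers; the second creates the RDM pointer file that gets attached to the VM.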
Any ideas?
Unable to restart Management Network in 6.7 DCUI
SSH into the 6.7 host
# TERM=xterm dcui
F2 to log in
Select Restart Management Network
F11 (OK)
Result: Management Network not restarted
Is this normal?
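If the DCUI route keeps failing, the same restart can be attempted from the shell by bouncing the management VMkernel interface; a minimal sketch, assuming the management interface is vmk0:
# esxcli network ip interface set -e false -i vmk0
# esxcli network ip interface set -e true -i vmk0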
Are VMware patches cumulative?
Hi Folks
I am about to update vCenter and the ESXi hosts from vSphere 6.0 to Update 3. Just curious whether I have to install Update 1 and Update 2 in sequence, or whether I can install Update 3 directly.
How can I export vhost on windows?
ESXi 5.5.0: "Bootbank cannot be found at path '/bootbank'" (2018-05-20T10:01:01.199Z cpu1:169363213) warning 5/20/2018 6:01:01 AM
This is on a Dell PowerEdge T110; I found this error while checking a client system. I saw some articles about a bad flash reader specific to HP, and others seeing this after upgrading to 6.x, but nothing about what it's really telling me. Is the USB boot drive going (or gone) bad? Something else? If the system reboots, will it be able to start?
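A couple of read-only checks from the shell may help narrow it down; a sketch:
# ls -l /bootbank /altbootbank
# esxcli storage filesystem list
The first shows whether the bootbank symlinks still resolve; the second lists the mounted volumes, where the two vfat bootbank partitions should appear. If they have dropped out of the list, the USB boot device has likely failed or fallen off the bus.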
Thanks for any input or suggestions.
Mick
SMTP configuration ESXi 6.5
Where can I configure SMTP mail so I can receive alerts from my web console? Thank you.
Unable to access web interface of ESXI server 6.5
Unable to access the ESXi 6.5 web interface; it shows that the site is unreachable.
Unable to execute commands from the shell as well. Received error:
IO error: [Errno 111] Connection refused
All of this happened after trying to import an SSL certificate into ESXi. Any idea what is happening or how to fix it? I tried rebooting but the issue still persists.
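If the shell still works, one common recovery path is to move the imported certificate aside, regenerate the self-signed pair, and restart the management agents; a minimal sketch:
# cd /etc/vmware/ssl
# mv rui.crt rui.crt.bad
# mv rui.key rui.key.bad
# /sbin/generate-certificates
# /etc/init.d/hostd restart
# /etc/init.d/vpxa restart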
ESXi realtime performance charts don't show in vCenter 5.1a
Hi experts,
My vCenter Server doesn't show ESXi performance data.
I found KB 2009291:
Cannot view the realtime Performance Overview charts after upgrading to vCenter Server 5.0 (2009291)
But my vCenter version is 5.1a, and it is a clean install, not an upgrade.
I can see realtime performance data when I access the ESXi host directly with the vSphere Client.
Could you tell me why vCenter cannot retrieve performance data from ESXi?
Thank you
Hironori
Atom C3758/X553 GbE driver for ESXi 6.5/6.7
A few days ago, I bought an Atom C3758-based motherboard.
Unfortunately, ESXi 6.5/6.7 has no inbox driver for the X553 GbE.
So I built an X553-only driver from the ixgbe-5.3.6 Linux driver.
I'm using it with the following parts:
Asrock Rack C3758D4I-4L
Kingston KVR24R17D4K4/128
Intel ET Dual Port Server Adapter
(initial connection by igb inbox driver)
Transcend JetFlash 720S
SilverStone SST-CS280 / SST-ST45SF
I have tested with IPv4, MTU 9000, and no VLANs only (mainly iSCSI connections).
Other settings may cause problems.
However, if you are interested in the driver, give it a try.
To use this driver:
# esxcli software acceptance set --level=CommunitySupported
# esxcli software vib install -v /tmp/net65-x553-5.3.6.x86_64.vib
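After a reboot, the adapter should be claimed and show up in the NIC list:
# esxcli network nic list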
What server (manufacturer & model) works best for VMware ESXi?
Hello, I am currently using a Dell PowerEdge T410 running VMware ESXi for a small company's server services. I have 4 VMs running on it. It is rather dated, and I want to upgrade to a newer server. This is the first server I've worked with, so I'm not very familiar with where to go from here. Is there an official make and model that works best for ESXi? If not, what server would you suggest for a new VMware ESXi build?
USB 3.0 not available for macOS
I've installed macOS Sierra and High Sierra, and I'm trying to connect an iOS device with a USB (Lightning) cable through VMware Remote Console to the virtual machine.
But for some reason USB 3.0 doesn't show up in the VM settings.
How can I add USB 3.0 to the settings of a VM that is running macOS?
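In case it helps: the xHCI (USB 3.0) controller can also be added by editing the VM's .vmx file while it is powered off; a minimal sketch (assumes the VM's hardware version is recent enough to support xHCI):
usb_xhci.present = "TRUE"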
Configure delay on ESXi bootloader
Hi,
Following up on the thread "Configure delay on ESXi bootloader":
is there, after all this time (5 years), a way to change those "5 (4) seconds" in the ESXi bootloader?
Yours faithfully, Pavel
VM utilization cause high CPU in a different VM
Hello VMware community,
Please note, I have placed all of the config information at the end of this post so I can jump right into the issue discussion.
Basically, the situation is that at certain times, when one VM (VM-bad) becomes highly active, the CPU utilization on another VM (VM-good) climbs despite not having any additional load. In extreme situations, the CPU load on VM-good gets so high as to incur a DoS event and traffic through VM-good ceases. VM-good does eventually recover but only after VM-bad has calmed down. The issue is intermittent in nature and only seems to occur when VM-bad is booting or at other select periods of high-activity.
After much troubleshooting and stat collection (using esxtop --> CSV files; parsed w/ perfmon), it appears that disk access\utilization is the root cause. I say this because the stats paint a picture of very high IOPS (peak > 150) and long read\write times (peak > 2500ms) for both VMs. The core apps running on VM-bad require near constant disk access due to it performing full packet capture as well as pretty constant updates to several mySQL databases. Now I'm sure those reading this are thinking "Well DUH - yeah, that's your issue" and I'm pretty sure I don't disagree but I do have some questions re: how ESXi handles shared resources and where my expectations are inaccurate.
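For anyone wanting to reproduce this kind of capture: esxtop's batch mode writes perfmon-compatible CSV; a sketch with arbitrary values (10-second samples for one hour):
# esxtop -b -d 10 -n 360 > /tmp/esxtop-stats.csv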
Additionally, I think I was able to validate this as being a disk access issue because after migrating the packet capture destination and mySQL databases to a second mechanical HDD, VM-good's CPU utilization and DoS issues seem to have pretty much gone away. Even when VM-bad's disk latency times run high (i.e. 600+ ms), CPU utilization on VM-good remains nearly constant and packets flow unimpeded. I have not captured a new set of esxtop stats post-migration but I may if VM-good starts acting up again.
As an aside, prior to above-mentioned migration, I did tinker with the various tuning options such as IOPS, CPU allocation (both p- and v- CPUs), CPU cycles, etc. None of those seemed to help. However, if considering this in the context of the suspected root cause (disk latency), this probably makes sense. My belief is that most disk latency issues - and especially those in ESXi - are not really solved as much by tuning as they are by adding more spindles, SSDs, or implementing RAID (i.e. RAID 10 or 0).
So, with all of the above as backstory, what I'm really looking for here is some clarification and validation\correction about my findings & expectations:
1. Overall, does it make sense for high activity in one VM to directly cause a measurable CPU utilization increase in a different VM? I think it does if I'm correct about disk latency being the root cause, but I need some validation and\or elaboration here.
2. If #1 does make sense, I expected at least some improvement by tuning the VMs via IOPS reservation and\or limitation (the .vmx form of that limit is sketched after this list), but none of those seemed to have any real effect at all... but maybe this is expected if the root cause is disk latency.
3. Also, despite VM-bad's high resource demands, I expected ESXi to do a better job balancing the load & resources, especially given how much overhead this system has (please see specs below for more details). But based on my observations & testing, it appears my expectation was inaccurate, so please fill me in here.
4. My theory on why there is a directly measurable CPU utilization increase on VM-good is that, due to increased disk latency, VM-good's CPU is getting bogged down by having to manage its own set of resource issues such as buffers filling, etc. Does this make sense?
5. But... if #4 does make sense, then why doesn't VM-bad also have issues? That's part of what makes this kind of strange - the VM that is actually bogged down still gets its own job done... packets are not dropped, packet capture is flawless, and the DBs all seem to have the expected set of data and do not get corrupted.
6. Finally, if the root cause is disk latency, is there any other tuning that can be done so as not to require the disk migration I performed? I am pretty sure the answer is no, and that more spindles, faster drives, RAID, etc. are the only true remedies to disk latency issues.
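As a reference for item 2: the per-disk IOPS limit set in the UI corresponds to a sched entry in the .vmx; a minimal sketch, assuming the first disk on the first SCSI controller and an arbitrary cap:
sched.scsi0:0.throughputCap = "150"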
Please comment where appropriate and thanks in advance for taking the time to read and respond.
Platform
- Dell PowerEdge R710.
- 1x 1TB WD Red 3.5" e-SATA mechanical HDD - NO RAID.
- 32GB RAM.
- 8x GE nics.
- 2x Xeon X5550 4-core CPUs - 16 logical CPUs total (due to HT).
- Hyper-threading & other virtualization settings are enabled in system BIOS.
- ESXi v6.5.
- NO OVER-SUBSCRIPTION is done on this host.
ESXi base config
- 2x VMs - both running xNIX - VM-bad = Ubuntu 14.04; VM-good = FreeBSD variant.
VM-bad config
- 16GB RAM, 8GB swap; 500GB disk space; 4 vCPUs across 2 cores; 2x GE NICs.
- CPU utilization varies widely from 2-99%; the latter occurs infrequently during periods of high-activity such as boot, DB cleanup, PCAP trimming, etc.
- RAM in use hovers between 13-16GB w\ very little swap being used (< 300MB).
- All resource allocations meet or exceed those specified by the devs.
- All resource tuning settings are at default (no limitations OR reservations).
- open-vm-tools & daemon v9.4.0.25793 (build-1280544) installed; daemon is running.
VM-good config
- 4GB RAM; 0 swap; 150GB disk space; 2 vCPUs across 2 cores; 3x GE NICs.
- CPU utilization prior to migration varied widely from 2-99% in lock step with VM-bad's disk latency times; post-migration, utilization varies between 2-35% and appears independent of VM-bad's behavior.
- RAM in use hovers between 1-2GB; no swap.
- All resource allocations meet or exceed those specified by the devs.
- All resource tuning settings are at default (no limitations OR reservations).
- VMware Tools daemon v10.0.5.52125 (build-3227872) installed & running.
Supermicro X8DT3 fans run at max speed, ESXi 6.5
Hello.
I have a problem: I installed ESXi 6.5 on a Supermicro X8DT3 and it works fine, except for terrible fan noise. The fans run at over 16,000 RPM.
From IPMI: [screenshot not included]
The same data in the BIOS.
But in ESXi: [screenshot not included]
So it looks like ESXi is reading the sensor data incorrectly.
I updated the BIOS and IPMI to the latest available versions, set Energy Save mode in the BIOS and in ESXi, and set the IPMI fan thresholds to 10,000 RPM: [screenshot not included]
But the fans still scream.
Maybe someone knows what to do?
p.s. sorry for my poor english
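One thing worth trying is adjusting the BMC's lower fan thresholds with ipmitool from another machine, so the BMC stops treating the fans as failed and ramping them to full speed; a sketch (the BMC address, credentials, and FAN1 sensor name are placeholders):
# ipmitool -H 192.168.0.100 -U ADMIN -P password sensor thresh FAN1 lower 300 450 600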
The VM configuration was rejected. Refer to the browser console.
Hello,
I get this error message even with the default settings in the ESXi web client wizard; I tried Chrome, IE, and Firefox - same error.
The problem occurred with my original version:
VMware ESXi 6.5.0 build-7967591
VMkernel XXXXXXXXX 6.5.0 #1 SMP Release build-7967591 Mar 7 2018 23:06:14 x86_64 x86_64 x86_64 ESXi
and after updating:
VMware ESXi 6.5.0 build-8294253
VMkernel XXXXXXXXX 6.5.0 #1 SMP Release build-8294253 Apr 17 2018 19:05:39 x86_64 x86_64 x86_64 ESXi
What could the problem be?
To be precise, when the wizard reaches the settings step, no network interface appears in the list;
if I delete that empty entry and request the creation of an interface, the following error occurs:
Unfortunately, an unexpected error has occurred.
The client can continue to run, but we recommend that you refresh your browser and submit a bug report.
Press the Esc key to hide this dialog and continue without refreshing.
Details:
Cause: TypeError: Cannot read property 'value' of undefined
Version: 1.27.1
Build: 7909286
ESXi: 6.5.0
Browser: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36
Exception stack:
TypeError: Cannot read property 'value' of undefined
at n.$scope.addNetwork (https://XXXXXXXXX /ui/scripts/main.js:372:5697)
at onClick (https://XXXXXXXXX /ui/scripts/main.js:369:10004)
at n.scope.clickAction (https://XXXXXXXXX /ui/scripts/main.js:460:9897)
at https://XXXXXXXXX /ui/scripts/main.js:321:23160
at https://XXXXXXXXX /ui/scripts/main.js:429:289
at n.$eval (https://XXXXXXXXX /ui/scripts/main.js:320:17497)
at n.$apply (https://XXXXXXXXX /ui/scripts/main.js:320:17723)
at HTMLAnchorElement.<anonymous> (https://XXXXXXXXX /ui/scripts/main.js:429:271)
at HTMLAnchorElement.dispatch (https://XXXXXXXXX /ui/scripts/main.js:317:14464)
at HTMLAnchorElement.r.handle (https://XXXXXXXXX /ui/scripts/main.js:317:11251)
Searching on the subject hasn't given me much of an explanation.
Regards,
Gilles
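For wizard bugs like this in the embedded Host Client, one workaround that is often suggested is updating the Host Client to the latest Fling build; a sketch (the URL is VMware's published "latest" VIB location, quoted here from memory):
# esxcli software vib update -v http://download3.vmware.com/software/vmw-tools/esxui/esxui-signed-latest.vib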
Repeated com.vmware.vcHms messages to install VIB on ESXi 6
We are getting very similar results to this thread:
Repeated com.vmware.vcHms messages to install vr2c-firewall.vib
All of our hosts are attempting to install this firewall VIB every 4 hours (according to the esxupdate.log file on the host).
The vSphere console shows the following: an install followed by an "update SSL thumbprint registry".
I am trying to determine what the install is, why it happens so often, and whether or not it is an issue.
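A quick way to confirm what is already installed on a host:
# esxcli software vib list | grep -i vr2c
If this reports the same version the installer keeps pushing (6.1.0.10819-3051487 per the log below), the repeated installs are effectively no-ops, which matches the "Skipping installed VIBs" line in the log.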
Below is a selection of the esxupdate.log file, edited to remove our IP addresses and domain name.
2015-11-13T16:34:28Z esxupdate: vmware.runcommand: INFO: runcommand called with: args = '['/sbin/esxcfg-advcfg', '-q', '-g', '/UserVars/EsximageNetTimeout']', outfile = 'None', returnoutput = 'True', timeout = '0.0'.
2015-11-13T16:34:28Z esxupdate: vmware.runcommand: INFO: runcommand called with: args = '['/sbin/esxcfg-advcfg', '-q', '-g', '/UserVars/EsximageNetRetries']', outfile = 'None', returnoutput = 'True', timeout = '0.0'.
2015-11-13T16:34:28Z esxupdate: vmware.runcommand: INFO: runcommand called with: args = '['/sbin/esxcfg-advcfg', '-q', '-g', '/UserVars/EsximageNetRateLimit']', outfile = 'None', returnoutput = 'True', timeout = '0.0'.
2015-11-13T16:34:28Z esxupdate: esxupdate: INFO: ---
Command: update
Args: ['update']
Options: {'nosigcheck': None, 'retry': 5, 'loglevel': None, 'cleancache': None, 'viburls': ['https://vRA IP ADDRESS:8043/vib/vr2c-firewall.vib'], 'meta': None, 'proxyurl': 'dalvsphere01.contoso.com:80', 'timeout': 30.0, 'cachesize': None, 'hamode': True, 'maintenancemode': None}
2015-11-13T16:34:29Z esxupdate: vmware.runcommand: INFO: runcommand called with: args = '['/sbin/bootOption', '-rp']', outfile = 'None', returnoutput = 'True', timeout = '0.0'.
2015-11-13T16:34:29Z esxupdate: vmware.runcommand: INFO: runcommand called with: args = '['/sbin/bootOption', '-ro']', outfile = 'None', returnoutput = 'True', timeout = '0.0'.
2015-11-13T16:34:29Z esxupdate: imageprofile: INFO: Adding VIB VMware_locker_tools-light_6.0.0-1.17.3029758 to ImageProfile (Updated) HP-ESXi-6.0.0-iso-600.9.1.39
2015-11-13T16:34:29Z esxupdate: vmware.runcommand: INFO: runcommand called with: args = '['/sbin/esxcfg-advcfg', '-U', 'host-acceptance-level', '-G']', outfile = 'None', returnoutput = 'True', timeout = '0.0'.
2015-11-13T16:34:29Z esxupdate: downloader: DEBUG: Downloading from https://vRA IP ADDRESS:8043/vib/vr2c-firewall.vib...
2015-11-13T16:34:29Z esxupdate: Transaction: INFO: Skipping installed VIBs VMware_bootbank_vr2c-firewall_6.1.0.10819-3051487
2015-11-13T16:34:29Z esxupdate: Transaction: INFO: Final list of VIBs being installed:
2015-11-13T16:34:29Z esxupdate: HostImage: DEBUG: Staging image profile [(Updated) HP-ESXi-6.0.0-iso-600.9.1.39]
2015-11-13T16:34:29Z esxupdate: HostImage: DEBUG: VIBs in image profile: VMware_bootbank_sata-sata-sil_2.3-4vmw.600.0.0.2494585, VMware_bootbank_lsi-mr3_6.605.08.00-7vmw.600.1.17.3029758, VMware_bootbank_scsi-ips_7.12.05-4vmw.600.0.0.2494585, Hewlett-Packard_bootbank_scsi-hpdsa_5.5.0.12-1OEM.550.0.0.1331820, Hewlett-Packard_bootbank_char-hpcru_6.0.6.14-1OEM.600.0.0.2159203, Hewlett-Packard_bootbank_hponcfg_6.0.0.04-00.13.17.2159203, VMware_bootbank_ipmi-ipmi-devintf_39.1-4vmw.600.0.0.2494585, Hewlett-Packard_bootbank_hpssacli_2.0.23.0-5.5.0.1198610, VMware_bootbank_sata-sata-promise_2.12-3vmw.600.0.0.2494585, VMware_bootbank_net-bnx2x_1.78.80.v60.12-1vmw.600.0.0.2494585, VMware_bootbank_sata-sata-nv_3.5-4vmw.600.0.0.2494585, VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-1vmw.600.0.0.2494585, VMware_bootbank_scsi-mptsas_4.23.01.00-9vmw.600.0.0.2494585, VMware_bootbank_sata-sata-svw_2.3-3vmw.600.0.0.2494585, VMwa
2015-11-13T16:34:29Z esxupdate: re_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-2vmw.600.0.11.2809209, Hewlett-Packard_bootbank_hp-conrep_6.0.0.1-0.0.13.2159203, VMware_bootbank_scsi-adp94xx_1.0.8.12-6vmw.600.0.0.2494585, VMware_bootbank_esx-tboot_6.0.0-0.0.2494585, VMware_bootbank_uhci-usb-uhci_1.0-3vmw.600.0.0.2494585, Brocade_bootbank_scsi-bfa_3.2.3.0-1OEM.550.0.0.1198610, VMware_bootbank_cpu-microcode_6.0.0-0.0.2494585, VMware_bootbank_xhci-xhci_1.0-2vmw.600.1.17.3029758, VMware_bootbank_scsi-fnic_1.5.0.45-3vmw.600.0.0.2494585, VMware_bootbank_net-tg3_3.131d.v60.4-1vmw.600.0.0.2494585, VMware_bootbank_scsi-bnx2i_2.78.76.v60.8-1vmw.600.0.11.2809209, VMware_bootbank_scsi-bnx2fc_1.78.78.v60.8-1vmw.600.0.0.2494585, VMware_bootbank_scsi-qla4xxx_5.01.03.2-7vmw.600.0.0.2494585, VMware_bootbank_lsu-hp-hpsa-plugin_1.0.0-1vmw.600.0.0.2494585, VMware_bootbank_rste_2.0.2.0088-4vmw.600.0.0.2494585, VMware_bootbank_sata-ahci_3.0-22vmw.600.
2015-11-13T16:34:29Z esxupdate: 1.17.3029758, VMware_bootbank_block-cciss_3.6.14-10vmw.600.0.0.2494585, VMware_bootbank_misc-cnic-register_1.78.75.v60.7-1vmw.600.0.0.2494585, VMware_bootbank_ata-pata-sil680_0.4.8-3vmw.600.0.0.2494585, VMware_bootbank_net-forcedeth_0.61-2vmw.600.0.0.2494585, VMware_bootbank_nmlx4-en_3.0.0.0-1vmw.600.0.0.2494585, VMware_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.600.0.0.2494585, VMware_bootbank_ata-pata-via_0.3.3-2vmw.600.0.0.2494585, VMware_bootbank_vr2c-firewall_6.1.0.10819-3051487, VMware_bootbank_esx-base_6.0.0-1.20.3153772, VMware_bootbank_emulex-esx-elxnetcli_10.2.309.6v-0.0.2494585, VMware_locker_tools-light_6.0.0-1.17.3029758, VMware_bootbank_ata-pata-atiixp_0.4.6-4vmw.600.0.0.2494585, VMware_bootbank_net-mlx4-core_1.9.7.0-1vmw.600.0.0.2494585, VMware_bootbank_net-cnic_1.78.76.v60.13-2vmw.600.0.0.2494585, VMware_bootbank_net-e1000_8.0.3.1-5vmw.600.0.0.2494585, Hewlett-Packard_bootbank_hp-ams_60
2015-11-13T16:34:29Z esxupdate: 0.10.1.0-32.2159203, VMware_bootbank_ata-pata-cmd64x_0.2.5-3vmw.600.0.0.2494585, VMware_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.600.0.0.2494585, VMware_bootbank_net-vmxnet3_1.1.3.0-3vmw.600.0.0.2494585, VMware_bootbank_ima-qla4xxx_2.02.18-1vmw.600.0.0.2494585, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-2vmw.600.0.11.2809209, VMware_bootbank_lsi-msgpt3_06.255.12.00-8vmw.600.1.17.3029758, VMware_bootbank_vmware-fdm_6.0.0-2656760, VMware_bootbank_lsu-lsi-mpt2sas-plugin_1.0.0-4vmw.600.1.17.3029758, VMWARE_bootbank_mtip32xx-native_3.8.5-1vmw.600.0.0.2494585, VMware_bootbank_net-ixgbe_3.7.13.7.14iov-20vmw.600.0.0.2494585, VMware_bootbank_nvme_1.0e.0.35-1vmw.600.1.17.3029758, Hewlett-Packard_bootbank_hpacucli_9.20-9.0, VMware_bootbank_nmlx4-core_3.0.0.0-1vmw.600.0.0.2494585, VMware_bootbank_esx-xserver_6.0.0-0.0.2494585, VMware_bootbank_sata-ata-piix_2.12-10vmw.600.0.0.2494585, VMware_bootbank_qlnativ
2015-11-13T16:34:29Z esxupdate: efc_2.0.12.0-5vmw.600.0.0.2494585, Hewlett-Packard_bootbank_hp-smx-provider_600.03.07.00.23-2159203, VMware_bootbank_lsu-lsi-mptsas-plugin_1.0.0-1vmw.600.0.0.2494585, VMware_bootbank_ohci-usb-ohci_1.0-3vmw.600.0.0.2494585, VMware_bootbank_misc-drivers_6.0.0-1.17.3029758, VMware_bootbank_ata-pata-serverworks_0.4.3-3vmw.600.0.0.2494585, VMware_bootbank_net-nx-nic_5.0.621-5vmw.600.0.0.2494585, VMware_bootbank_net-e1000e_2.5.4-6vmw.600.0.0.2494585, VMware_bootbank_sata-sata-sil24_1.1-1vmw.600.0.0.2494585, Hewlett-Packard_bootbank_hp-esxi-fc-enablement_550.2.1.8-1198610, VMware_bootbank_ata-pata-amd_0.3.10-3vmw.600.0.0.2494585, VMware_bootbank_net-bnx2_2.2.4f.v60.10-1vmw.600.0.0.2494585, VMware_bootbank_scsi-megaraid2_2.00.4-9vmw.600.0.0.2494585, VMware_bootbank_ata-pata-pdc2027x_1.0-3vmw.600.0.0.2494585, Emulex_bootbank_ima-be2iscsi_10.2.250.1-1OEM.550.0.0.1331820, QLogic_bootbank_net-qlcnic_5.5.190-
2015-11-13T16:34:29Z esxupdate: 1OEM.550.0.0.1331820, Emulex_bootbank_scsi-be2iscsi_10.2.250.1-1OEM.550.0.0.1331820, Hewlett-Packard_bootbank_hpnmi_600.2.3.14-2159203, VMware_bootbank_net-mlx4-en_1.9.7.0-1vmw.600.0.0.2494585, VMware_bootbank_nmlx4-rdma_3.0.0.0-1vmw.600.0.0.2494585, Hewlett-Packard_bootbank_hp-esx-license_1.0-05, VMware_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.600.0.0.2494585, Hewlett-Packard_bootbank_char-hpilo_600.9.0.2.8-1OEM.600.0.0.2159203, VMware_bootbank_lpfc_10.2.309.8-2vmw.600.0.0.2494585, Hewlett-Packard_bootbank_hp-build_600.9.1.39-2159203, VMware_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.600.0.0.2494585, VMware_bootbank_net-igb_5.0.5.1.1-5vmw.600.0.0.2494585, VMware_bootbank_scsi-aacraid_1.1.5.1-9vmw.600.0.0.2494585, VMware_bootbank_scsi-mptspi_4.23.01.00-9vmw.600.0.0.2494585, Hewlett-Packard_bootbank_scsi-hpvsa_5.5.0-90OEM.550.0.0.1331820, Hewlett-Packard_bootbank_hpbootcfg_6.0.0.02-01.00.11.2159203, H
2015-11-13T16:34:29Z esxupdate: ewlett-Packard_bootbank_vmware-esx-hp_vaaip_p2000_2.1.0-2, VMware_bootbank_esx-dvfilter-generic-fastpath_6.0.0-0.0.2494585, VMware_bootbank_scsi-mpt2sas_19.00.00.00-1vmw.600.0.0.2494585, VMware_bootbank_elxnet_10.2.309.6v-1vmw.600.0.0.2494585, VMware_bootbank_scsi-aic79xx_3.1-5vmw.600.0.0.2494585, Hewlett-Packard_bootbank_hptestevent_6.0.0.01-00.00.8.2159203, VMware_bootbank_ehci-ehci-hcd_1.0-3vmw.600.0.0.2494585, VMware_bootbank_vsanhealth_6.0.0-3000000.2.0.1.17.2972216, VMware_bootbank_net-enic_2.1.2.38-2vmw.600.0.0.2494585, VMware_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.600.0.0.2494585, VMware_bootbank_scsi-hpsa_6.0.0.44-4vmw.600.0.0.2494585
2015-11-13T16:34:29Z esxupdate: vmware.runcommand: INFO: runcommand called with: args = '['/usr/sbin/vsish', '-e', '-p', 'cat', '/hardware/bios/dmiInfo']', outfile = 'None', returnoutput = 'True', timeout = '0.0'.
2015-11-13T16:34:29Z esxupdate: vmware.runcommand: INFO: runcommand called with: args = '['/sbin/smbiosDump']', outfile = 'None', returnoutput = 'True', timeout = '0.0'.
2015-11-13T16:34:29Z esxupdate: imageprofile: DEBUG: VIB VMware_locker_tools-light_6.0.0-1.17.3029758 is being removed from ImageProfile (Updated) HP-ESXi-6.0.0-iso-600.9.1.39
2015-11-13T16:34:29Z esxupdate: HostImage: INFO: Nothing for LiveImageInstaller to do, skipping.
2015-11-13T16:34:29Z esxupdate: imageprofile: DEBUG: VIB VMware_locker_tools-light_6.0.0-1.17.3029758 is being removed from ImageProfile (Updated) HP-ESXi-6.0.0-iso-600.9.1.39
2015-11-13T16:34:29Z esxupdate: HostImage: INFO: Nothing for BootBankInstaller to do, skipping.
2015-11-13T16:34:29Z esxupdate: HostImage: INFO: Nothing for LockerInstaller to do, skipping.
2015-11-13T16:34:29Z esxupdate: HostImage: DEBUG: Host is remediated by installer:
2015-11-13T16:34:29Z esxupdate: esxupdate: DEBUG: <<<
[root@adsesx16:/var]
ESXi 6.5 -> ESXi 6.7 Upgrade fails to boot with Trusted Execution enabled in BIOS
Team,
Does anyone have any idea why ESXi 6.7 doesn't like Trusted Execution enabled in the BIOS? It worked fine in ESXi 6.5.
I'm running the following...
Dell PowerEdge T30
BIOS v1.0.12
Intel Xeon E3-1225 v5 CPU
Intel HD Graphics P530
64 GB DDR RAM
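Once the host is booted (e.g. with Trusted Execution temporarily disabled in the BIOS), the trusted boot state can be queried from the shell; a minimal sketch:
# esxcli hardware trustedboot get
On 6.7 this reports whether DRTM is enabled and a TPM is present, which may help confirm whether TXT is what is tripping the boot.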
Thanks,
Brad
Where is ESXi Multi Monitors / Number of Displays option used?
When I edit the Video Card settings for a VM in ESXi, I can see there is an option to specify "Number of Displays". However, I can't see what this actually does. What is the purpose of this option and how should it be used?
I've tried the following clients:
Fusion 8.5
Workstation 12.5
VMRC
ESXi Web Client
Horizon View 7
Remote Desktop
And they all seem to do their own thing in terms of configuring the number of displays on the host, and do not seem to honour this setting. Is that correct - and if so, what is the purpose of this option?
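For what it's worth, the UI option corresponds to entries in the VM's .vmx file; a minimal sketch of what gets written (values arbitrary):
svga.numDisplays = "2"
svga.maxWidth = "5120"
svga.maxHeight = "2880"
Individual clients may still reconfigure the guest's display topology on top of this, which would match the behaviour described above.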
Thanks
Need some help with setting up ESXi 6.5 3D accelerated VM
Hi All
First foray into ESXi here: I'm consolidating a whole lot of PCs I have sitting around into VMs running on a secondhand HP ProLiant server I'm setting up as a home lab.
I've got ESXi 6.5 set up fine - many times over, as I've tried every version since 5.0 trying to get a Hauppauge 2200 PCIe tuner card working in passthrough so I can virtualise my media server. I've learnt a lot in the process - definitely way more complicated than I was expecting - but after much trial and error, and tweaking settings in passthru.map and the VM config, it's now running great on the latest ESXi.
My next challenge is getting some sort of GPU support in the VMs so I can virtualise the Windows PCs I have here. I've read a lot on this, but much of the material is aimed at either corporate setups running GRID GPUs or gamers who game directly on the server. My server will be headless. I've tried various things without much success, so I'm hoping someone can point me in the right direction. Here is what I'm trying to achieve:
- Virtualise Win7 and Win10 machines without losing 3D acceleration, Aero, etc.
- I need decent 3D accelerated desktop performance, but not expecting to run high-end games or run CAD software although it would be nice if I could run some games that may need modern directx versions.
- It's a home lab so budget is limited. I can't afford GRID GPUs and enterprise ESXi licenses. I'm willing to buy some second-hand graphics hardware as needed.
- I was hoping to control the VMs using Fusion 8.5 or Workstation 12, as I already have licenses for both of these, set up on a Mac and a Windows laptop that I won't be virtualising.
- I have no vSphere server; I'm connecting directly to the ESXi host.
Based on that what would be the recommended approach for setting up 3d accelerated machines?
Here's what I've tried:
- I've installed an older nvidia consumer pcie card (GeForce 9400 GT) I had lying around in the host as a proof of concept to try out vDGA.
- I've set the onboard graphics in my HP server as primary and the nvidia card as secondary so I can allocate it out to the VM without losing my ESXi console, iLo etc
- I've set the card to passthrough, masked the hypervisor in the VM settings, set msiEnabled = FALSE, and got the drivers loaded in the guest (these .vmx entries are sketched after this list).
- The card is detected fine in the guest, no BSOD. I can't use the card though without plugging a monitor into it. Once I do that, I can use it. However I only get 3D acceleration on the desktop that is output on the monitor attached to the card. The desktop shown on my vmware fusion/workstation session is treated as a monitor running on the svga adapter and is not accelerated.
- I've tried all sorts of things - delaying VMware Tools startup, disabling the SVGA adapter, uninstalling the SVGA driver in VMware Tools, etc. I can't seem to get a VMware session running on the nvidia display adapter. I'm not sure if it's something I'm doing wrong? I have a feeling the gamers are okay with this, as they use the monitor(s) attached to their video card.
- I then tried disabling the secondary card in the server and using the nvidia card as the primary. I lose the ESXi console partway through booting the host when the card is reassigned to passthrough. Within the VM, I still end up with the VMware SVGA adapter. This time, though, the nvidia driver seems to be onto me and I get the Code 43 error - it's somehow detecting the VMware driver when it wasn't before. Disabling this adapter gets the nvidia driver to load, but my session still isn't accelerated; it behaves more like a remote desktop session.
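For reference, the .vmx entries mentioned in the list above look like this (a sketch; pciPassthru0 assumes the GPU is the first passthrough device):
hypervisor.cpuid.v0 = "FALSE"
pciPassthru0.msiEnabled = "FALSE"
svga.present = "FALSE"
The last line removes the virtual SVGA adapter entirely, rather than just disabling it inside the guest.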
So one question I have at this point is whether running a VM using Workstation or Fusion to connect to an ESXi host will even give me a session capable of hardware acceleration? If not, what is the solution? If so, what am I doing wrong?
Sorry about the long post; I appreciate you taking the time to read it.