Ever since upgrading to ESXi 6.0 U2 (from 5.5) we've had a problem with VMware Tools on our Windows VMs (2008 R2 and 2012 R2). The tools show as current but "not running". If I RDP or open a vSphere console to one of these VMs, the tools will start and show as running. However, after logging off the VM, the tools stop again (sometimes right away; other times it takes a while). I opened an SR with VMware support and they suggested upgrading Tools from 10.0.6 to 10.0.9, which I have done. The issue is still occurring on some virtual machines. I've checked the power options within Windows on the VMs to see if something was making the VM sleep, but I don't see anything like that. Just wondering if anyone else has experienced this. We don't have this problem with any of our Linux VMs.
vmtools current but not running
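One thing worth ruling out on the affected guests is the service configuration itself. A minimal sketch, assuming the Windows service is named "VMTools" (verify with `sc query` on the VM; the recovery values below are illustrative):

```shell
:: Run in an elevated command prompt inside the affected guest.
:: Confirm the VMware Tools service exists and check its current state:
sc query VMTools
:: Ensure it starts automatically at boot:
sc config VMTools start= auto
:: Have the Service Control Manager restart it if it stops unexpectedly
:: (three restarts, 60s apart; the failure counter resets after a day):
sc failure VMTools reset= 86400 actions= restart/60000/restart/60000/restart/60000
```

If the service is already set to Automatic yet still stops after logoff, the Service Control Manager entries in the System event log around the stop time are the next place to look.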
How do I manually add driver for USB attached network interface
Hello,
I have an ESXi v6.7 deployment on Dell OptiPlex 7040 hardware with one onboard NIC. I want to enable a second NIC and have obtained a USB-attached Ethernet adapter to that end. The device comes with a CD containing the needed driver files (with .c and .h extensions) and a makefile for compilation:
asix.c
asix.h
axusbnet.c
axusbnet.h
Makefile
readme
I am, though, not certain where on the system to execute this in order to install the driver and thereby bring up the USB-connected NIC. The instructions advise:
============================================================================
ASIX AX88178 USB 2.0 Gigabit Ethernet Network Adapter
ASIX AX88772 USB 2.0 Fast Ethernet Network Adapter
ASIX AX88772A USB 2.0 Fast Ethernet Network Adapter
ASIX AX88760 USB 2.0 MTT HUB and USB 2.0 to Fast Ethernet Combo Controller
ASIX AX88772B USB 2.0 Fast Ethernet Network Adapter
ASIX AX88772C USB 2.0 Fast Ethernet Network Adapter
Driver Compilation & Configuration on Linux
============================================================================
This driver has been verified on Linux kernel 2.6.14 and later.
================
Prerequisites
================
To build the driver, you need the Linux kernel sources installed on the
build machine, and the version of the running kernel must match the
installed kernel sources. If you don't have the kernel sources, you can get
them from www.kernel.org or contact your Linux distributor. If you don't know
how, please refer to the KERNEL-HOWTO.
Note: Please make sure the kernel is built with host-side USB support
(EHCI, OHCI, or UHCI) enabled.
===========================
Conditional Compilation Flag
===========================
[AX_FORCE_BUFF_ALIGN]
Description:
Some USB host controllers have alignment issues with USB buffers.
Turn on this flag if your USB host controller cannot handle buffers
that are not double-word aligned.
When this flag is on, the driver will fix up egress packets so they are
aligned on a double-word boundary before delivery to the USB host controller.
Setting:
1 -> Enable TX buffers forced on double word alignment.
0 -> Disable TX buffers forced on double word alignment.
Default:
0
================
Getting Started
================
1. Extract the compressed driver source file to your template directory by the
following command:
[root@localhost template]# tar -xf DRIVER_SOURCE_PACKAGE.tar.bz2
2. The driver source files should now be extracted into the current directory.
Execute the following command to compile the driver:
[root@localhost template]# make
3. If the compilation succeeds, asix.ko will be created in the current
directory.
4. If you want to use the modprobe command to load the driver, execute the
following command to install the driver on your Linux system:
[root@localhost template]# make install
================
Usage
================
1. If you want to load the driver manually, go to the driver directory and
execute the following command:
[root@localhost template]# insmod asix.ko
2. If you installed the driver during compilation, you can use
the following command to load the driver automatically:
[root@localhost anywhere]# modprobe asix
If you want to unload the driver, execute the following command:
[root@localhost anywhere]# rmmod asix
================
Special define
================
There is an RX_SKB_COPY preprocessor define in asix.h that can solve an rx_throttle problem
in some versions of the 3.4 Linux kernel. Removing the comment before the define enables
this feature.
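For reference, the readme's steps condense to the following sequence on a Linux build machine (note: not the ESXi host itself — this makefile targets the Linux kernel, so the paths below assume a matching Linux system):

```shell
# Verify the running kernel matches the installed kernel sources:
uname -r
ls /lib/modules/$(uname -r)/build    # kernel headers/sources must exist here
# Extract, build, and install the module (archive name per the readme):
tar -xf DRIVER_SOURCE_PACKAGE.tar.bz2
make                                 # produces asix.ko on success
sudo make install                    # registers the module for modprobe
sudo modprobe asix                   # load it; 'rmmod asix' unloads
```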
Thanks in advance for assistance in this regard.
Trying to upgrade Dell server - need to remove VIB
I have a PowerEdge R520 server installed with Dell-ESXi-5.5U2-2068190-A00.
I have uploaded the ESXi 6.5 U1 ISO image (not the custom Dell one).
Update manager returns:
The upgrade contains the following set of conflicting VIBs:
LSI_bootbank_scsi-mpt3sas_04.00.00.00.1vmw-1OEM.500.0.0.472560
LSI_bootbank_scsi-mpt3sas_04.00.00.00.1vmw-1OEM.500.0.0.472560
Remove the conflicting VIBs or use Image Builder to create a custom upgrade ISO image that contains the newer versions of the conflicting VIBs, and try to upgrade again.
After checking VMWare documentation I found the following article:
Scsi-mpt3sas: This driver VIB provides the 'mpt3sas' driver, which enables support for AVAGO MPT Fusion based SAS3 (SAS 12.0 Gb/s) controller(s). Dell doesn't support this device, and hence this driver can be safely removed.
I tried to remove it but cannot find the correct VIB name:
~ # esxcli software vib list|grep LSI
scsi-megaraid-perc9 6.901.55.00-1OEM.550.0.0.1331820 LSI VMwareCertified 2014-08-27
scsi-mpt2sas 18.00.00.00.1vmw-1OEM.550.0.0.1198610 LSI VMwareCertified 2014-08-27
scsi-mpt3sas 04.00.00.00.1vmw-1OEM.500.0.0.472560 LSI VMwareCertified 2014-08-27
~ # esxcli software vib remove --vibname scsi-mpt3sas
[NoMatchError]
No VIB matching VIB search specification 'scsi-mpt3sas'. Please refer to the log file for more details.
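For what it's worth, `esxcli software vib remove` also accepts the full VIB specification and supports a dry run. A hedged sketch, using the exact name from the Update Manager message (put the host in maintenance mode first):

```shell
esxcli system maintenanceMode set --enable true
# Preview the transaction without changing anything:
esxcli software vib remove --dry-run --vibname "LSI_bootbank_scsi-mpt3sas_04.00.00.00.1vmw-1OEM.500.0.0.472560"
# If the dry run reports the VIB, remove it for real and reboot:
esxcli software vib remove --vibname "LSI_bootbank_scsi-mpt3sas_04.00.00.00.1vmw-1OEM.500.0.0.472560"
reboot
```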
Initial network setup on standalone ESXi
Right now, we have a fresh install of ESXi 6.5 on a standalone server. It has a configured IP on our management network, connected by one NIC to the vSwitch. There are the default two port groups for management and VM Network. We want to spin up a VM, but that VM is on another VLAN subnet and may have interfaces on other VLAN subnets as well. What is the best approach to this? Do I establish a trunk to an external switch using another NIC, then connect that to a new vSwitch? The same vSwitch? How do I configure the VLAN? It's been a while since I last set up new network connectivity, especially on a new, standalone host. Any help on best or common practice would be appreciated. Thanks.
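One common layout for the scenario above is a second vSwitch fed by a trunked NIC, with one port group per VLAN. A sketch using esxcli (the NIC name vmnic1, the vSwitch name, and VLAN ID 100 are illustrative):

```shell
# New standard vSwitch with the spare NIC as its uplink; the physical
# switch port that vmnic1 plugs into must be configured as a trunk.
esxcli network vswitch standard add --vswitch-name vSwitch1
esxcli network vswitch standard uplink add --vswitch-name vSwitch1 --uplink-name vmnic1
# One port group per VLAN the VM needs; ESXi tags/untags the traffic.
esxcli network vswitch standard portgroup add --vswitch-name vSwitch1 --portgroup-name "VLAN-100"
esxcli network vswitch standard portgroup set --portgroup-name "VLAN-100" --vlan-id 100
```

The VM's vNIC is then attached to the "VLAN-100" port group; additional VLANs are just additional port groups on the same vSwitch.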
PSOD on ESXI 6.0.0 [Releasebuild-3620759 x86_64] with Mem Allocation error
Hi,
Looking for help and guidance on a PSOD event that occurred on one of our ESXi host devices.
We're running VMware ESXi 6.0.0-3620759 on UCSC-C220-M4S (Intel Xeon CPU E5-2680 v3).
The host has three VMs running on it, mainly Cisco applications (ISE, etc.), currently using 32 vCPUs.
I tried to do some research on possible bugs or related threads with similar descriptions, but this is the only one that I could find with similar backtrace logs (the thread was left unanswered):
PSOD on ESXI 6.0.0 [Releasebuild-3620659 x86_64] with Mem Allocation error.
I'm not an expert in virtualization technologies, but I'm hoping somebody has ideas on this problem (sorry if the screenshot resolution is bad; the other thread is linked above in case it helps):
Thanks,
A normal power-off causes the SSD 'Unexpected Power Loss Count' to increase
Hi everyone:
My ESXi host has two SSDs: ESXi 6.5 U1 is installed on one, and the other just stores VM files.
My hardware is as follows:
Motherboard: SuperMicro X11SAE-M with Intel C236 chipset
CPU: Intel Xeon E3-1268L v5
RAM: 16GB ECC from Kingston
SSDs: Liteon T9 200G
Samsung SM863 480G
I found that a normal power-off causes the 'Unexpected Power Loss Count' in the S.M.A.R.T. data of the Samsung SSD to increase, but when I booted WinPE and powered off, it didn't happen. So what happens when ESXi powers off the system? How can I fix this, please?
All the hardware, including the HBA controller and the Samsung SSD with this firmware, is compatible with ESXi 6.5 U1, so is this a bug?
vSphere 6.0.0 not seeing 82571EB Gigabit Ethernet Controller
host = ESXi v6.0 build: 3380124
vSphere = 6.0.0 build: 3016447
0000:00:19.0 Network controller: Intel Corporation 82579V Gigabit Network Connection [vmnic0]
0000:03:00.0 Network controller: Intel Corporation 82571EB Gigabit Ethernet Controller [vmnic1]
0000:03:00.1 Network controller: Intel Corporation 82571EB Gigabit Ethernet Controller [vmnic2]
I added the 82571EB dual-NIC card to the host motherboard. The lspci command on ESXi shows it recognizes the card, but in vSphere it only displays the management NIC, vmnic0.
Since ESXi recognizes the 82571EB card, I assume it has the correct drivers?
How do I get vSphere to see the 82571EB dual-NIC card so I can use it for my VMs?
[Update 02.10.2016]
After many host reboots while attempting to get a guest to use the passed-through GeForce 770 GPU, all of a sudden I see the 82571EB showing up under Hardware -> Network Adapters. I created "vSwitch1/VM Network 2" and added the dual ports to it. It looks like I can now add them to my VMs.
ESXi software components show the NIC driver as "e1000e driver 3.2.2.1-NAPI with firmware 0.13-4 5.11-2", which should work for the 82571EB. For whatever reason it seems to be working now.
Used space is much bigger than real used space
Hi All,
I have a VM which uses about 400GB, but the datastore shows the VM using 1.98 TB. Now I want to run consolidation for this VM, but there is not enough space to run it. I have tried to delete all snapshots, but no luck, because there's nothing in Snapshot Manager, just "You are here". Via the ESXi console, it is shown as below:
How can I solve this issue?
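When Snapshot Manager shows nothing but the space is still consumed, orphaned snapshot delta files are the usual suspect. A hedged sketch from the ESXi shell (the datastore and VM folder names are illustrative):

```shell
# Look for leftover snapshot delta files in the VM's folder:
cd /vmfs/volumes/datastore1/MyVM
ls -lh *-delta.vmdk *-00000*.vmdk
# Find the VM's ID, then remove/consolidate all snapshots, including
# ones Snapshot Manager no longer lists:
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/snapshot.removeall <vmid>
```

Note that removeall still needs working space on the datastore, so freeing space or temporarily migrating another VM's files may be necessary first.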
Lots of "Deassert" messages about drives
I've got a whole lot of "Deassert" messages on my new install of ESXi:
SAS A 0: Config Error - Deassert | Green | 0 | Cable/Interconnect | Unknown |
Disk Drive Bay 1 Cable SAS B 0: Config Error - Deassert | Green | 0 | Cable/Interconnect | Unknown |
Disk Drive Bay 1 Drive 0: Drive Fault - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 0: In Critical Array - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 0: In Failed Array - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 0: Parity Check In Progress - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 0: Predictive Failure - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 0: Rebuild Aborted - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 0: Rebuild In Progress - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 1: Drive Fault - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 1: In Critical Array - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 1: In Failed Array - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 1: Parity Check In Progress - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 1: Predictive Failure - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 1: Rebuild Aborted - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 1: Rebuild In Progress - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 2: Drive Fault - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 2: In Critical Array - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 2: In Failed Array - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 2: Parity Check In Progress - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 2: Predictive Failure - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 2: Rebuild Aborted - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 2: Rebuild In Progress - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 3: Drive Fault - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 3: In Critical Array - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 3: In Failed Array - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 3: Parity Check In Progress - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 3: Predictive Failure - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 3: Rebuild Aborted - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 3: Rebuild In Progress - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 4: Drive Fault - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 4: In Critical Array - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 4: In Failed Array - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 4: Parity Check In Progress - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 4: Predictive Failure - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 4: Rebuild Aborted - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 4: Rebuild In Progress - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 5: Drive Fault - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 5: In Critical Array - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 5: In Failed Array - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 5: Parity Check In Progress - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 5: Predictive Failure - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 5: Rebuild Aborted - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 5: Rebuild In Progress - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 6: Drive Fault - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 6: In Critical Array - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 6: In Failed Array - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 6: Parity Check In Progress - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 6: Predictive Failure - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 6: Rebuild Aborted - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 6: Rebuild In Progress - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 7: Drive Fault - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 7: In Critical Array - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 7: In Failed Array - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 7: Parity Check In Progress - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 7: Predictive Failure - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 7: Rebuild Aborted - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 1 Drive 7: Rebuild In Progress - Deassert | Green | 0 | Storage | Unknown |
Disk Drive Bay 3 ROMB Battery 0: Failed - Deassert | Green | 0 | Battery | Unknown |
Disk Drive Bay 3 ROMB Battery 0: Low - Deassert |
What's really odd about these messages is that I only have FOUR drives, yet I'm seeing messages about EIGHT drives here, and they seem to just repeat through these various states.
Are these real, or could I be missing an update?
I installed a customized ESXi for the Dell R710 from the Dell website.
Thanks in advance
ESXi 6.5 data recovery
Dear all,
We have ESXi 6.5 and a 2008 R2 VM working as a file server. A snapshot was taken last March and work proceeded until October. For some unknown reason the current state was lost and the VM reverted to the snapshot taken in March, so all data on the D: drive returned to March-dated files, and any other modifications made during this period were lost. What kind of recovery can we attempt, given there is no VMDK that includes the lost data?
scratch partition and syslog
Hi,
I see that the ESXi scratch partition is by default set to /tmp/scratch; where is this stored?
Is it local on the SD card used for the ESXi install, or on a ramdisk (space allocated in memory for storing these logs)?
Strangely, when I run df -h on an ESXi host, it does not list the local drive/partitions (SD card).
For syslog, if I set up a remote host to send logs to (Syslog.global.logHost), do I also need to specify a datastore (Syslog.global.logDir), or can I set this to null?
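A sketch of the remote-syslog side via esxcli, which maps to the same Syslog.global.* advanced settings (the collector address and port are illustrative; Syslog.global.logDir can normally be left at its default rather than set to null):

```shell
# Point the host's syslog at a remote collector:
esxcli system syslog config set --loghost="udp://192.168.1.50:514"
esxcli system syslog reload
# Allow outbound syslog through the ESXi firewall:
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
# Verify the effective settings, including the local log directory:
esxcli system syslog config get
```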
Thanks
Web Client Error vmware-csd webpage cannot be displayed IE 11 and Windows 7
I get an error trying to connect to the vCenter web client from a non-domain computer running Windows 7, IE 11, and Flash 21.
The error appears after going to https://serverFQDN/vsphere-client/
The site redirects me to vmware-csd://csd/?sessionId=............
Error: The webpage cannot be displayed
most likely cause: Some content or files on this website require a program that you don't have installed.
I have a new installation of vCenter Server 6, all on one box using the default Postgres database, on Windows Server 2012 R2.
I have set up the domain CA to see the vCenter server as a subordinate CA and have added the CA to my non-domain box as a trusted publisher, etc.
So there are no certificate errors when accessing the website.
The website comes up fine on the server aside from this annoying popup for the login (No apps are installed to open this type of link (VMware-csd)). I click on the login and it logs in fine.
The website comes up fine on a domain windows 7 with IE 9 even though 9 is not supported by VMware.
I turned the Windows firewall off for all settings on the server.
I tried turning on compatibility view settings, adding the site to the trusted sites, no luck.
So, other than Adobe Flash, are there any other plugin/software requirements for the web client that I may be missing?
Any other suggestions/pointers?
Thanks!
Laura
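On vSphere 6.0 the vmware-csd:// links are handled by the Client Integration Plugin, so one thing worth checking on the non-domain client is whether that protocol handler is registered at all. A minimal check (assuming a standard Windows registry layout):

```shell
:: On the client machine, see whether anything is registered for the
:: vmware-csd:// protocol; no output suggests the Client Integration
:: Plugin is missing or was installed for a different user/browser.
reg query HKCR\vmware-csd /s
```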
My ESXi server completely locks up at times
Hello everyone! I came across your forum trying to find a solution to a problem I am having with my server and I thought I'd make a post and see if anyone here has any ideas.
Here is my setup:
Motherboard - ASRock 970 Extreme4
CPU - AMD FX-8320
Memory - 32GB G.Skill Ripjaws X Series, 4 x 8GB DDR3 1600 PC3 12800 (F3-1600C9Q-32GXM)
Drive Controller - IBM M1015 flashed to LSI SAS2008
NIC - Intel PRO/1000 Dual Port Server Adapter (disabled the onboard Realtek NIC)
On the M1015, I'm running three 4TB Seagate NAS ST4000VN000 drives, two 3TB Seagate drives (not sure of the model), and two 2TB Western Digital Green drives. I have a couple of 256GB SSDs and a 750GB WD drive plugged into the motherboard's onboard controllers.
Here's how the VMs break out:
WHS2011 - 8GB RAM, 4 cores (1 socket), passed through the M1015 (with its 8 disks), plus two virtual disks (one on an SSD for the OS, one on the 750GB drive for torrents). I am running StableBit DrivePool. I was running FlexRAID (tried SnapRAID too), but I'll get to that later.
Windows 7 VM - hosts an Emby server and I use this for playing around - 8GB RAM, 4 cores (1 socket), 1 virtual disk on an SSD
Windows 7 VM 2 - hosts a Plex server - 8GB RAM, 4 cores (1 socket), 1 virtual disk on an SSD
Windows XP VM - 1GB RAM, 1 core, 1 virtual disk on an SSD (not usually running)
Windows Server 2012 R2 BSOD 0x0000013c
Hello,
One of my Windows 2012 R2 RDS servers crashes with a BSOD INVALID_IO_BOOST_STATE (code: 0x0000013c). Some searching told me that it should be a driver with a memory leak.
Guest: Windows Server 2012 R2 up to date, hardware version 11, LSI Logic Storage Controller, E1000E NIC, Host: Dell PowerEdge R720, ESXi 6.0.0
This is the only server with this issue, but it is also the sole RDS server in our VMware environment. Do you have any suggestions on how to fix this?
When I run the analysis on the minidump, it says the issue is with msoia.exe, which is the Microsoft Office Telemetry Agent, but when I submitted a Microsoft Office 365 ticket, they referred me to Windows Server (commercial) support, who referred me back to Microsoft Office 365. I've attached my minidump. Should I post my full dump? It's more than a gigabyte in size, so I'm holding off unless it's really needed.
Thanks in advance!
Regards,
Alan Chan
vSphere Distributed Switch Migration
Hi Folks,
Has anyone worked on migrating a vSphere Distributed Switch from vCenter 5.5 to vCenter 6.x?
Note: I am using vCenter Server on Windows, not the VCSA.
RDM for 3TB hard drive in ESXi 6.5
I recently upgraded hardware from a plain box to a Dell T600, carrying my RDM disks over.
I have no option but to reconfigure these RDMs because the device paths are different on the new hardware, but strangely I am not able to add an RDM for a disk larger than 2TB.
[:~] vmkfstools -r /vmfs/devices/disks/naa.5000c5007c63caf9 /vmfs/volumes/folder/rdm-filename.vmdk
Failed to create virtual disk: The destination file system does not support large files (12).
I am not a VMware expert; I use this as a home lab, and I remember this worked using either option, -r or -z:
vmkfstools -r /vmfs/devices/disks/t10.ATA_____ST3000DM0012D1ER166__________________________________W500AZ0J /vmfs/volumes/folder/rdm-filename.vmdk
I have another 1TB hard drive, and it worked with -z, as it's less than 2TB.
I am using this RDM virtual disk for FreeNAS, and I've attached screenshots of what the drive looks like under ESXi devices.
I hope it shouldn't be a problem adding the existing FreeNAS drive as an RDM?
Any help will be much appreciated and thanks a lot in advance...
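One hedged thing to check: a >2TB RDM requires the datastore holding the mapping file to be VMFS5 (or newer), and "does not support large files" is what an older VMFS volume typically returns. A sketch (device path from the post; the datastore name "folder" is illustrative):

```shell
# Check the VMFS version of the datastore that will hold the mapping file:
vmkfstools -Ph /vmfs/volumes/folder     # look for "VMFS-5" (or newer) in the output
# Create a physical-compatibility RDM for the 3TB disk:
vmkfstools -z /vmfs/devices/disks/naa.5000c5007c63caf9 \
    /vmfs/volumes/folder/rdm-filename.vmdk
```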
NVIDIA GRID vGPU - Rocky start, any good experiences/help?
Greetings,
We are attempting to configure dual Tesla M10 GPU cards in our VxRail E570F servers (Dell 14G 2U box) to be used for a Citrix XenDesktop VDI cluster. We are finding that we cannot get ESXi to properly allocate memory to these cards with the default MMIO setting of 56T. Changing the setting to 12T or 512G per this Dell article causes the server to purple-screen a few minutes after a full boot.
This is using the latest available VIB from NVIDIA on ESXi 6.5u3c.
The NVIDIA support portal is down, so I'm attempting to get knowledge elsewhere on this tech.
Thanks for any advice.
Problem connecting to host
Can't grow VMFS Datastore
Hello
I wanted to expand the datastore because it's running out of free space, and the RAID 1 where it resides had about 50GB unassigned (I left those 50GB unused when I initially created the VMFS).
Since I couldn't do it in the GUI ("Increase datastore capacity" greyed out), I followed these instructions to do it on the CLI: https://michlstechblog.info/blog/esxi-expand-datastore-from-command-line/
I could expand the partition, and all went well until the part about growing the VMFS.
Initial part:
[root@esx:~] partedUtil get /vmfs/devices/disks/eui.b0269f4100d00000
30377 255 63 488017920
1 128 364644400 0 0
[root@esx:~] partedUtil getUsableSectors /vmfs/devices/disks/eui.b0269f4100d00000
34 488017886
[root@esx:~] partedUtil resize /vmfs/devices/disks/eui.b0269f4100d00000 1 128 488017886
[root@esxi:~] vmkfstools -V
If I do as described, using the partition indicator ":1", I get a "not found" message.
If I remove the ":1" from the second argument (the destination?), I get "Failed to get info from head device path":
[root@esx:/dev/disks] vmkfstools --growfs /vmfs/devices/disks/eui.b0269f4100d00000:1 /vmfs/devices/disks/eui.b0269f4100d00000:1
Not found
Error: No such file or directory
[root@esx:/dev/disks] vmkfstools --growfs /vmfs/devices/disks/eui.b0269f4100d00000:1 /vmfs/devices/disks/eui.b0269f4100d00000
Failed to get info from head device path /dev/disks/eui.b0269f4100d00000.
Error: No such file or directory
I'm stuck here and don't know what to do.
The partition size is 232GB but the datastore is only 173GB...
Here are some screenshots.
Could you help me?
Thanks in advance.
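A hedged sequence to try after the partedUtil resize: rescan storage so the VMkernel picks up the updated partition table, confirm the new end sector, and then run growfs with the same partition path for both arguments:

```shell
# Rescan so the resized partition is picked up:
esxcli storage core adapter rescan --all
# Confirm partition 1 now ends at the new sector:
partedUtil getptbl /vmfs/devices/disks/eui.b0269f4100d00000
# Grow the VMFS volume; both arguments are the head partition:
vmkfstools --growfs /vmfs/devices/disks/eui.b0269f4100d00000:1 \
    /vmfs/devices/disks/eui.b0269f4100d00000:1
# Refresh VMFS metadata:
vmkfstools -V
```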
Windows Failover Cluster Manager
hi,
Can you use windows failover cluster manager with esxi hosts. If I connect my 8 Esxi hosts to the "AD DC" then try add them to the windows failover cluster manager does it work?
is there any particular configuration that needs to be set up. Im guessing the user accounts need to be all exactly the same.
matt