
Posts tagged ‘esx’


ESX 4.1: Local users cannot login

If you regularly SSH into your ESX hosts, this may be old news to you. But if you’re like me and mostly manage your ESX hosts via vSphere Client, you might have a surprise waiting for you when you upgrade to ESX & ESXi 4.1. With the advent of ESX Active Directory integration, VMware kindly decided to impose some new requirements on local user accounts. What does this mean to you?

For me, it meant that when I tried to SSH into my ESX host, I ran into “Access is denied.” And with only one non-root user account on the system, that meant no remote shell access to the host at all. Root is restricted to console access by default, so SSH as root was no help either. Thankfully the Dell Remote Access Card (DRAC) put me on the console, so to speak, and let me poke around as root.

The solution came from a Google search, a somewhat unhelpful VMware KB article (1024235), and a little connecting of the dots. AD integration places a new dependency on the local “Administrators” role. If local user accounts aren’t in that role, they can’t log in.

Oddly enough, vSphere Client has to be connected directly to the ESX host (not to vCenter) to edit the role and its local users. Browsing the same screens while connected through vCenter won’t get you anywhere.


VMFS out of heap memory

The default heap size for VMFS-3 is 16 MB. This allows for a maximum of 4 TB of open virtual disk capacity on a single ESX host.

In ESX 3.0, the value cannot be adjusted. VMware is considering a patch to correct the issue.

In ESX 3.5, the value can be adjusted:

  1. Log in to the VirtualCenter or the ESX host using the Virtual Infrastructure Client. If connecting to VirtualCenter, select the ESX host from the inventory.
  2. Select the Configuration tab.
  3. Select Advanced Settings.
  4. Select VMFS3.
  5. Update the value of VMFS3.MaxHeapSizeMB.

The maximum heap size is 128 MB. This allows a maximum of 32 TB of open storage.
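The scaling implied by those two data points is linear: 16 MB of heap covers 4 TB, so each MB of heap covers 256 GB of open virtual disk. A few lines of Python sanity-check the arithmetic (the 256 GB-per-MB ratio is derived from the numbers above, not quoted from VMware documentation):

```python
# Open virtual disk capacity scales linearly with VMFS-3 heap size:
# 16 MB of heap -> 4 TB open capacity, i.e. 256 GB per MB of heap.
GB_PER_HEAP_MB = 4 * 1024 // 16  # = 256

def max_open_disk_tb(heap_size_mb):
    """Maximum open virtual disk capacity (TB) for a given heap size (MB)."""
    return heap_size_mb * GB_PER_HEAP_MB / 1024

print(max_open_disk_tb(16))   # default heap:  4.0 TB
print(max_open_disk_tb(128))  # maximum heap: 32.0 TB
```

Handy if you want to work out the heap size needed for some intermediate amount of open storage rather than jumping straight to the 128 MB maximum.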

Applies to: VMware ESX 3.x


Virtual Center 2.x incorrectly sizes disks during migration

VMs that were formerly RDMs (Raw Device Mappings), and which have had one or more disks grown via LUN migration in EMC Navisphere (or a similar function in another vendor’s SAN tool), may fail to get appropriately-sized disks on the target SAN during a storage migration. The cause is that the RDM mapping file on the source SAN is never updated to reflect the size of the grown LUN. VMware Virtual Center uses that mapping file to create the new VMDK files on the target, so if the mapping file does not reflect the proper size, Virtual Center will create a smaller file on the target, possibly resulting in loss of data or program integrity.

Example: server1 originally had a 30GB C:\ prior to a rebuild. When it was rebuilt, the same LUNs were used; however, due to a larger RAM allocation (8GB instead of 4GB), the C:\ drive needed to be expanded, which was done via LUN migration in EMC Navisphere. The mapping file (the pointer .vmdk file) never changed to reflect the new size. When the storage migration took place, Virtual Center created only a 30GB virtual disk on the target SAN, while Windows booted thinking it had a 50GB disk (the expanded size). The result was that applications (e.g. SQL Server) and possibly other components failed to function after migration.

The solution is to delete and re-add the RDM mappings of any VM whose LUNs have been grown before migrating, so that the correct size is used on the target.
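The failure mode above boils down to a stale size in a mapping file, so it can be caught with a simple pre-migration inventory check. This is a conceptual sketch with made-up numbers — the function name and data layout are my own, not a Virtual Center API:

```python
# Conceptual pre-migration check: compare the size recorded in each RDM
# mapping (pointer) .vmdk against the LUN's actual capacity, and flag any
# VM whose mapping file is stale and needs its RDM deleted and re-added.

def stale_rdm_mappings(vms):
    """Return (vm, disk) pairs whose mapping-file size lags the LUN size.

    `vms` maps a VM name to a list of disks; each disk is a dict holding
    the size the mapping file records and the LUN's real size, in GB.
    """
    stale = []
    for vm, disks in vms.items():
        for disk in disks:
            if disk["mapped_gb"] < disk["actual_gb"]:
                stale.append((vm, disk["name"]))
    return stale

# server1's C: drive was grown from 30GB to 50GB via LUN migration, but
# its mapping file still says 30GB -- so it gets flagged before migrating.
inventory = {
    "server1": [{"name": "C:", "mapped_gb": 30, "actual_gb": 50}],
    "server2": [{"name": "C:", "mapped_gb": 40, "actual_gb": 40}],
}
print(stale_rdm_mappings(inventory))  # [('server1', 'C:')]
```

Running a check like this against your RDM inventory before a storage migration beats discovering a truncated disk after Windows boots.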

Applies to: VMware ESX 3.x, Virtual Center 2.x