Tag: ESXi

ESXi NC551m stops working after firmware update

In an ideal world, management would provide unlimited funding to upgrade hardware continuously. We all know that’s not going to happen! Sometimes it is necessary to prolong the lifespan of servers as long as possible, particularly when they are extremely well-provisioned devices, even by today’s standards.

Such is the case with our HP ProLiant BL460c G7 blades. Each is equipped with a dual-port 10Gb onboard NIC (Emulex HP NC553i) and a dual-port mezzanine NIC (Emulex HP NC551m), for a total of four 10Gb ports.

Recently, after running the HP Service Pack for ProLiant (SPP), we lost network connectivity on the Emulex HP NC551m adapter. It wasn’t simply that no network traffic was being passed; rather, the entire adapter disappeared from the configuration in ESXi 6, and its ports were not visible with the SSH CLI command esxcli network nic list. It’s as if the NC551m adapter simply wasn’t there! Continue reading
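If you hit something similar, one quick way to confirm what the host actually sees is to check the output of esxcli network nic list programmatically. A minimal Python sketch, using made-up sample output (adapter names, drivers, MACs, and descriptions are all illustrative, not captured from a real host):

```python
# Sketch: parse `esxcli network nic list`-style output and check whether an
# expected adapter family is present. SAMPLE is illustrative, not real output.
SAMPLE = """\
Name    PCI Device    Driver  Admin Status  Link Status  Speed  Duplex  MAC Address        MTU   Description
vmnic0  0000:02:00.0  be2net  Up            Up           10000  Full    00:17:a4:77:00:01  1500  Emulex OneConnect NC553i 10Gb FlexFabric
vmnic1  0000:02:00.1  be2net  Up            Up           10000  Full    00:17:a4:77:00:03  1500  Emulex OneConnect NC553i 10Gb FlexFabric
"""

def nics(output: str) -> dict:
    """Map each vmnic name to its Description column."""
    result = {}
    for line in output.splitlines()[1:]:
        parts = line.split(None, 9)      # the Description may contain spaces
        if len(parts) == 10 and parts[0].startswith("vmnic"):
            result[parts[0]] = parts[9]
    return result

adapters = nics(SAMPLE)
nc551m_ports = [n for n, desc in adapters.items() if "NC551m" in desc]
print(sorted(adapters))   # ['vmnic0', 'vmnic1']
print(nc551m_ports)       # []: the NC551m ports are simply not there
```

In our case, only the onboard NC553i ports showed up at all, which is exactly what a check like this makes obvious at a glance.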

VCSA and ESXi password security

vcsa password security

I recently went looking for information on password security for the VCSA 6.0 & 6.5 and ESXi 6.0 & 6.5. Most specifically, I was interested in the number of passwords remembered, so I could define that in documentation for a client.

Try as I might, I couldn’t find documentation for VCSA number of passwords remembered or how to configure it anywhere! Continue reading

HPE Custom Image for ESXi 6.5U1 has been withdrawn due to purple-screen issues

HPE has quietly withdrawn the HPE Custom Image for ESXi 6.5U1 (July 2017) due to purple-screen issues experienced on a number of servers currently on the VMware HCL (http://vmware.com/go/hcl)!

The particular purple screen we saw when deploying this ISO against an HP ProLiant BL460c G7 was:

#PF Exception 14 in world 6824:sfcb-smx IP 0x1 addr 0x1

 

Continue reading

Patch your ESXi Hosts from the command line easily and quickly

In many situations it is desirable to patch your ESXi host(s) prior to being able to install or use VMware vSphere® Update Manager™.

UPDATED 4/18/2016: HP has a new URL for HP-customized VMware ISOs and VIBs

For example:

  • Prior to installing vCenter in a new cluster
  • Standalone ESXi installations without a vCenter Server
  • Hardware replacement where you have an ESXi configuration backed up with vicfg-cfgbackup.pl, but the rest of the hosts in the cluster are running a higher build number than the latest available ISO
  • It is just convenient on a new ESXi host, when internet connectivity is available!
  • Non-Windows environments that do not intend to create a Windows instance just for patching ESXi
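The build-number comparison behind the hardware-replacement scenario above is easy to automate. A small sketch, assuming version strings in the form reported by vmware -vl (the build numbers below are illustrative):

```python
# Sketch: compare ESXi build numbers parsed from `vmware -vl`-style strings
# to decide whether a freshly installed host must be patched from the CLI
# before it can rejoin the cluster. Build numbers here are illustrative.
import re

def build_number(version_string: str) -> int:
    """Extract the numeric build from e.g. 'VMware ESXi 6.0.0 build-3620759'."""
    match = re.search(r"build-(\d+)", version_string)
    if match is None:
        raise ValueError("no build number in: " + version_string)
    return int(match.group(1))

cluster_build = build_number("VMware ESXi 6.0.0 build-3620759")  # rest of cluster
iso_build = build_number("VMware ESXi 6.0.0 build-3029758")      # latest ISO
needs_cli_patch = iso_build < cluster_build
print(needs_cli_patch)   # True: the ISO is behind the cluster, so patch by hand
```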

Continue reading

Timekeeping on ESXi

Timekeeping on ESXi Hosts is a particularly important, yet often overlooked or misunderstood topic among vSphere Administrators.

I recall a recent situation where I created an anti-affinity DRS rule (separate virtual machines) for a customer’s domain controllers. Although ESXi time was correctly configured, the firewall had recently changed and no longer allowed NTP. As it happened, the entire domain was running fine and time was correct before the anti-affinity rule took effect. Unfortunately, as soon as the DC migrated (based on the rule I created), its time was synchronized with the ESXi host it was moved to, which was approximately 7 minutes slow! The net result was that users immediately experienced login issues.

Unfortunately, when you configure time on your ESXi host, there is no affirmative confirmation that the NTP servers you specified are reachable or valid! It doesn’t matter if you add correct NTP servers or completely bogus addresses to the Time Configuration; either way, ESXi will report that the NTP client is running and seemingly in good health! Moreover, there is no warning or alarm when NTP cannot sync with the specified server.

Let’s create an example where we add three bogus NTP servers:

In this example, you can see the three bogus NTP servers, yet the vSphere Client reports that the NTP Client is running and there were no errors!

The only way to tell if your NTP servers are valid and/or functioning is to access the shell of your ESXi host (SSH or console) and run the command: ntpq -p 127.0.0.1

image004

The result from ntpq -p demonstrates that *.mary.little.lamb is not an NTP server.

Now, let’s try using three valid NTP servers:

In this example, I have used us.pool.ntp.org to point to three valid NTP servers outside my network, and the result (as seen from the vSphere Client) is exactly the same as when we used three bogus servers!

image008

The result from ntpq -p demonstrates that there are three valid NTP servers resolvable by DNS (we used pool.ntp.org), but the ESXi host has not been able to poll them. This is what you see when a firewall is blocking traffic on port 123!

Additionally, when firewall rules change, preventing access to NTP, the ‘when’ column will show a value (sometimes in days!) much larger than the poll interval!

When an ESXi host is correctly configured with valid NTP servers and is actually getting time from those servers, the result from ntpq -p will look like this:

image010

Here you see the following values:

  • remote: Hostname or IP of the NTP server this ESXi host is actually using
  • refid: Identification of the time source. .INIT. means the ESXi host has not yet received a response; .CDMA. means the time stream is coming from a cellular network
  • st: Stratum of the server
  • t: Type of the peer: u (unicast), b (broadcast), l (local) or m (multicast). NTP traffic itself always travels over UDP
  • when: Seconds since the NTP server last responded. This is the important value: when the ‘when’ value grows larger than the ‘poll’ field, NTP is not working!
  • poll: Poll interval (in seconds)
  • reach: An 8-bit shift register displayed in octal (base 8), with each bit representing success (1) or failure (0) in contacting the configured NTP server. A value of 377 is ideal, representing success in the last 8 attempts to query the NTP server
  • delay: Round trip (in milliseconds) to the NTP server
  • offset: Difference (in milliseconds) between the actual time on the ESXi host and the reported time from the NTP server
  • jitter: Observed variance in responses from the NTP server. Lower values are better
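The two health checks just described (reach trending toward 377, and ‘when’ staying below ‘poll’) can be expressed in a few lines. A sketch that parses a single, illustrative ntpq -p peer line (the hostname and figures are made up):

```python
# Sketch: apply the two checks described above to one peer line of
# `ntpq -p` output. The sample line is illustrative.
SAMPLE = "*time.example.com .GPS.  1 u   37   64  377  12.345  -0.678  0.123"

def parse_peer(line: str) -> dict:
    fields = line.lstrip("*+-#x ").split()   # strip the peer-status tally code
    return {
        "remote": fields[0], "refid": fields[1], "st": int(fields[2]),
        "t": fields[3], "when": int(fields[4]), "poll": int(fields[5]),
        "reach": int(fields[6], 8),          # reach is printed in octal
        "delay": float(fields[7]), "offset": float(fields[8]),
        "jitter": float(fields[9]),
    }

peer = parse_peer(SAMPLE)
successes = bin(peer["reach"]).count("1")    # 377 octal -> all 8 polls answered
stale = peer["when"] > peer["poll"]          # True would mean NTP is not working
print(successes, stale)   # 8 False
```

With the firewall scenario described earlier, ‘when’ would climb far past ‘poll’ and reach would decay toward 0, and a check like this would flag it immediately.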

The NIST publishes a list of valid NTP IP addresses and hostnames, but I prefer to use pool.ntp.org in all situations where the ESXi host can be permitted access to an NTP server on port 123. The advantage of pool.ntp.org is that it changes dynamically with the availability and usability of NTP servers. Theoretically, pool.ntp.org is a set-and-forget kind of thing!

ESXi Time Best Practices

  • Do not use a VM (such as a Domain Controller) that could potentially be hosted by this ESXi host as a time source
  • Use only Stratum 1 or Stratum 2 NTP servers
  • Verify NTP functionality with: ntpq -p 127.0.0.1
  • VMs which are already timeservers (such as Domain Controllers) should use either native time services such as w32time or VMware Tools time synchronization, not both! See: VMware KB 1318

iSCSI with Jumbo Frames and Port Binding

10Gb iSCSI

Immediately after installing an ESXi Server, you may or may not have any storage at all. Most ESXi servers today are diskless, with the ESXi installation living on some sort of flash-based storage. In this case a fresh installation of ESXi will present with no persistent storage whatsoever, as you can see in the example below:

Diskless ESXi Installation

Some servers, however, still have traditional disks, as in the screenshot below where you can clearly see the HP Serial Attached SCSI Disk, a.k.a. “Local Storage” or “Directly Attached Storage,” but no other storage is listed.

ESXi Installed with disks

In either case, it should be noted that it is a Best Practice, when installing ESXi to disk or media, to use a RAID 1 configuration for the location of the ESXi installation.

iSCSI Storage Network Examples

Our first task will be creating a network for iSCSI storage traffic. This should be a completely separate network from Production or Management networks, but how you create the separation is entirely up to you.

Most VMware admins will prefer a physically separate network, but in the days of 10Gb NICs and a relatively smaller number of interfaces[1], VLAN separation will work as well.

Example of a valid iSCSI configuration using VLANs to separate networks

 

Example of a valid iSCSI configuration on dedicated NICs

Configuring an iSCSI Network on dedicated NICs

Creating a network for iSCSI Storage

Let’s begin by adding an entirely new vSwitch for our iSCSI Network. Click on: Add Networking in the upper-right corner of the screen

Select: VMkernel as the network type

Choose to create a vSphere Standard switch, using all of the interfaces which are connected to your iSCSI Storage network. In this example, vmnic4 and vmnic5 are connected to the iSCSI Storage network

For the Network Label of your first VMkernel connection, choose a name that can be remembered sequentially. I always create my VMkernel connections for iSCSI in the following order:

  • VMkernel-iSCSI01
  • VMkernel-iSCSI02
  • VMkernel-iSCSI03
  • VMkernel-iSCSI04

If I have 2 physical uplinks (NICs), I will create 2 VMkernel connections for iSCSI. If I have 4 uplinks, I will create 4 VMkernel connections for iSCSI. Following this standard for your iSCSI configuration will conform with VMware requirements for Port Binding[2] and assist you in establishing the order in which you bind the VMkernel connections.

Set a VLAN ID, if that is appropriate for your environment

Now choose an IP and Subnet Mask that will be unique to your ESXi host, on the iSCSI network

iSCSI IP Plan

I like to set-up my iSCSI networks with an orderly IP schema. If you were to use a bunch of sequential IP addresses for VMkernel connections, you would leave no room for orderly expansion.

In order to allow for the orderly expansion of either the number of ESXi hosts and/or the number of iSCSI VMkernel connections, I choose to increment my IP addresses by a given amount. For example, to anticipate a maximum of 20 ESXi hosts in a given environment, I would increment all of my VMkernel IP addresses by 20, like this:

        VMkernel-iSCSI01  VMkernel-iSCSI02  VMkernel-iSCSI03
ESXi #1 10.0.0.101        10.0.0.121        10.0.0.141
ESXi #2 10.0.0.102        10.0.0.122        10.0.0.142
ESXi #3 10.0.0.103        10.0.0.123        10.0.0.143
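The increment-by-20 scheme above is easy to generate programmatically. A sketch (the 10.0.0.0/24 addresses are examples; max_hosts reserves room for orderly growth):

```python
# Sketch of the increment-by-20 IP plan described above: each host gets one
# address per iSCSI VMkernel, offset by the planned maximum host count so
# the per-VMkernel ranges never collide.
def iscsi_ip_plan(base: str, first_octet: int, hosts: int,
                  vmkernels: int, max_hosts: int = 20) -> dict:
    """Return {host label: one IP per iSCSI VMkernel}."""
    return {
        f"ESXi #{h + 1}": [
            f"{base}.{first_octet + h + v * max_hosts}"
            for v in range(vmkernels)
        ]
        for h in range(hosts)
    }

plan = iscsi_ip_plan("10.0.0", 101, hosts=3, vmkernels=3)
for host, ips in plan.items():
    print(host, ips)
# ESXi #1 ['10.0.0.101', '10.0.0.121', '10.0.0.141']
# ESXi #2 ['10.0.0.102', '10.0.0.122', '10.0.0.142']
# ESXi #3 ['10.0.0.103', '10.0.0.123', '10.0.0.143']
```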

Click on: Finish

After you click: Finish, you will see the new vSwitch (in this case, vSwitch 1)

On vSwitch1, click: Properties (be careful, there are separate “Properties” dialogs for each vSwitch, as well as for the overall networking!)

Click: Add

Choose: VMkernel

For the network label, choose a name that follows (sequentially) the VMkernel connection you created earlier.

Set a VLAN ID, if that is appropriate for your environment

Set an IP that follows the convention you established earlier. In my case, I am going to increment each VMkernel by 20.

Click: Finish

Repeat the previous steps for any additional iSCSI VMkernel connections you may be creating. You may only bind one iSCSI VMkernel per available uplink (vmnic)

You will now find yourself on the Properties dialog for the vSwitch you created.

Highlight the vSwitch itself, and click: Edit

If you have chosen to use Jumbo Frames, set the MTU to 9000

Jumbo Frames MUST be configured on the vSwitch prior to setting the VMkernel MTU above 1500

Click: OK

Now select the first (lowest numbered) iSCSI VMkernel and click: Edit

If you have chosen to use Jumbo Frames, set the MTU to 9000 and then go to the NIC Teaming Tab

Our goal in this dialog is to remove all aspects of load-balancing and failover from the vSwitch in order to enable Port Binding. Port Binding will allow the vSphere Path Selection Policy (PSP) to more effectively balance iSCSI loads and implement failover in the event it is required.

In order to implement Port Binding, we must leave only one active NIC

VMkernel network adapter

Choose: Override switch failover order

Since this is the lowest-numbered iSCSI VMkernel, we are going to give it the lowest-numbered vmnic, in this case vmnic4.

Highlight all other vmnics and click the Move Down button until they are all listed in Unused Adapters. Don’t be tempted to leave any NICs in Standby; it will not work, per VMware’s port-binding requirements!

Click: OK

Now choose the next iSCSI VMkernel and choose: Edit

If you have chosen to use Jumbo Frames, set the MTU to 9000 and then go to the NIC Teaming Tab

Since this is the next iSCSI VMkernel, we are going to give it the next vmnic, in this case vmnic5.

Highlight all other vmnics and click the Move Down button until they are all listed in Unused Adapters. Don’t be tempted to leave any NICs in Standby; it will not work, per VMware’s port-binding requirements!

Click: OK

It is a good idea to check your settings by highlighting the vSwitch and each VMkernel in order and reviewing the settings in the right-side column

Repeat the previous steps for any additional iSCSI VMkernel connections you may have created and then click: Close

This is what the finished Standard vSwitch networking configuration should look like
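The binding rules just walked through (exactly one active uplink per iSCSI VMkernel, nothing in Standby, and no uplink reused) can be captured as a quick validity check. A sketch, where all VMkernel and vmnic names are examples:

```python
# Sketch: validate a port-binding plan against the rules above: exactly one
# active uplink per iSCSI VMkernel, no standby uplinks, and no uplink shared
# between VMkernels.
def valid_port_binding(teaming: dict) -> bool:
    """teaming maps a VMkernel name to {'active': [...], 'standby': [...]}."""
    used = []
    for uplinks in teaming.values():
        if len(uplinks["active"]) != 1 or uplinks["standby"]:
            return False                 # standby uplinks break port binding
        used.extend(uplinks["active"])
    return len(used) == len(set(used))   # each vmnic bound at most once

good = {"VMkernel-iSCSI01": {"active": ["vmnic4"], "standby": []},
        "VMkernel-iSCSI02": {"active": ["vmnic5"], "standby": []}}
bad  = {"VMkernel-iSCSI01": {"active": ["vmnic4"], "standby": ["vmnic5"]},
        "VMkernel-iSCSI02": {"active": ["vmnic5"], "standby": []}}
print(valid_port_binding(good), valid_port_binding(bad))   # True False
```

The "bad" plan fails for exactly the reason called out above: a vmnic left in Standby instead of Unused.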

iSCSI Software Adapter

Choose: Storage Adapters and then click: Add

Choose: Add Software iSCSI Adapter

Click: OK

Right-click on the iSCSI Software Adapter and choose: Properties

Select the tab: Network Configuration

Click: Add

Choose one of the iSCSI VMkernel connections from the list and click: OK

Now click: Add

Choose the other iSCSI VMkernel connection and click: OK

Repeat the previous process until all VMkernel iSCSI Connections are bound. DO NOT add Management Network connections, if they are available

Choose the tab: Dynamic Discovery

Click: Add

Enter the discovery IP of your SAN.

Enter just one address, one time.

Click: OK

The address will appear after some seconds.

Click the Static Discovery tab and take note of how many paths, or targets, your SAN presents (the more, the better!)

Click: Close

Click: OK

In a few seconds, you should see a listing of all of the devices (LUNs) available on your SAN

Creating a VMFS 5 Volume

Click on the option: Storage (middle column, in blue) and then click: Add Storage

Choose: Disk/LUN and then: Next

Choose from the available devices (LUNs) and click: Next.

Click: Next

Name your Datastore and click: Next

Choose: Maximum available space and click: Next

In reality, there is very little reason for choosing anything other than “Maximum available space”

Click: Next

Click: Finish

And your new VMFS 5 volume will be created!

  1. vSphere 6 Configuration Maximums
  2. Multipathing Configuration for Software iSCSI Using Port Binding

 

Install ESXi 6 to a physical server with IPMI

ESXi 6 on a HP Blade Server with iLO

We are going to install ESXi 6 on a physical server using HP’s IPMI interface known as iLO to perform the install. iLO is considered best-in-class for IPMI consoles, but still can take some getting used to. IPMI out-of-band interfaces collectively have the advantage of allowing users to:

  • Power servers on and off
  • Connect to ISO and FLP media
  • Input commands and view the console interface, including blue, purple and red screens that would not be visible with an in-band console

image002

First we are going to choose Image (by the picture of the CD/DVD)

image004

Then connect to the HP-customized ESXi image that we just downloaded. Always use the vendor-customized ESXi image for physical installs, when one is available

image006

Now click on the power icon and choose: Momentary Press

iLO Momentary Press

Wait a good long time to even see this

image010

And another good long time before the CD/DVD starts to load. When installing ESXi 6, the DVD will load to RAM (which is what you see happening below) and then the hypervisor will start.

image012

When the hypervisor has started, the screen will become yellow and grey, like below. The process speeds up from here.

image014

[Enter]

image016

[F11]

image018

Wait, just a few seconds (usually).

image020

[Enter]

image022

Select your choice here. The default is “Upgrade ESXi, preserve…” but we want a fresh install, so we choose “overwrite” [Enter]

image024

Choose your keyboard [Enter]

image026

Set a password [Enter]

image028

This next step may actually take a few minutes

image030

[F11]

image032

Wait, but since the binaries are all loaded by this point, this goes quickly.

image034

Disconnect the ISO from IPMI and then press [Enter]

image036

ESXi 6 on a HP Blade Server with iLO

Initial Configuration of an ESXi Host with the vSphere Client

There are certain basic settings you will want/need to configure before your ESXi host is suitable for use in production, or even in a lab environment. At the very least, you will need to give your ESXi a hostname and IP address.

Now, I could press [F2] here and configure my ESXi host using the Direct Console User Interface (DCUI), but the DCUI provides a limited set of options, and using an IPMI interface such as iLO (even though iLO is one of the best of its kind), is not always a user-friendly procedure. Besides, we covered using the DCUI in: http://www.johnborhek.com/vmware-vsphere/building-a-vsphere-home-or-learning-lab-2/

Instead, I will show you how to make these initial configurations using the VMware vSphere Client for Windows (sometimes called the vSphere Desktop Client or the vSphere C# Client), which is the only viable client for a standalone host.

Open the vSphere Client for Windows and enter the DHCP IP address you saw on the previous screen. You will use the user name root and the password you assigned during the install.

Just click Ignore here. Installing this certificate would be useless, as we are going to change the IP.

You may have to click Home to see this screen

Now choose the tab: Configuration and choose the option: Networking

Click on Properties of vSwitch0. Be careful as there are two “Properties” links on this screen. You want the one right by the vSwitch

Highlight the: Management Network and choose: Edit

We probably won’t need to change any of the settings here.

Choose the tab: IP Settings

Now select: Use the following IP settings and don’t forget to click: No IPv6 Settings

Now remember, as soon as you apply this, your client session will become invalid because the IP is now different.

Enter the new IP you assigned, along with the username root and your password

vSphere Client for Windows

Now is the time to “Install this certificate….” as well as: Ignore

Click on the tab: Configuration

Choose the option: DNS and routing and then: Properties

This is (probably) not the correct information, as it is supplied by DHCP.

Enter the correct hostname and domain, as well as the search domains (“Look for hosts in the following domains”)

You may see this if you left IPv6 enabled.

You are now finished with initial configuration of your ESXi Host and may proceed to set up storage, networking and everything else.