Thursday, July 7, 2016

Step-by-Step Configuration of a LOG Server and Client
Sample Exam question:- You are a system administrator. Log files make it easy to monitor a system. You have 40 servers running services such as Mail, Web, Proxy, DNS etc. Your task is to centralize the logs from all servers onto one LOG Server. How will you configure the LOG Server to accept logs from remote hosts?
Answer with Explanation
An important part of maintaining a secure system is keeping track of the activities that take place on the system. If you know what usually happens, such as understanding when users log into your system, you can use log files to spot unusual activity. You can configure what syslogd records through the /etc/syslog.conf configuration file.
The syslogd daemon manages all the logs on your system and coordinates with any of the logging operations of other systems on your network. Configuration information for syslogd is held in the /etc/syslog.conf file, which contains the names and locations for your system log files.
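Each rule in /etc/syslog.conf is a selector (facility.priority) followed by an action. A few representative lines as a sketch (typical Red Hat-style defaults; the exact paths vary by distribution):

```
# selector            action
mail.info             /var/log/maillog    # mail messages, info priority and above
kern.crit             @logserver          # forward critical kernel messages to a remote host
*.emerg               *                   # emergencies go to all logged-in users
```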
By default, the system accepts only logs generated by the local host. In this example we will configure a log server that accepts logs from the client side.
For this example we are using two systems, one Linux server and one Linux client. To complete the prerequisites for the log server, follow this link:
A linux server with ip address 192.168.0.254 and hostname Server
A linux client with ip address 192.168.0.1 and hostname Client1
An updated /etc/hosts file on both Linux systems
Running portmap and xinetd services
The firewall should be off on the server
We configured all these steps in our previous article.
We suggest you review that article before starting the configuration of the log server. Once you have completed the necessary steps, follow this guide.
Check that the syslog, portmap, and xinetd services are enabled in the system services list:

#setup
Select System services from the list
[*]portmap
[*]xinetd
[*]syslog
Now restart the xinetd and portmap services.

To keep these services enabled after a reboot, turn them on via the chkconfig command.
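The steps above can be sketched as follows (SysV-init commands, matching the RHEL/CentOS 5-era tools used in this article; on systemd-based distros the equivalent would be systemctl enable/restart):

```shell
# Turn the services on at boot and restart them now so the change
# takes effect immediately.
for svc in portmap xinetd syslog; do
    chkconfig "$svc" on      # persist across reboots
    service "$svc" restart   # apply now
done
```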

After a reboot, verify their status; they must be running.

Now open the /etc/sysconfig/syslog file

and locate the SYSLOGD_OPTIONS line.

Add the -r option to it so that syslogd accepts logs from clients.

-m 0 disables 'MARK' messages.
-r enables logging from remote machines
-x disables DNS lookups on messages received with -r
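On a RHEL/CentOS 5-era system the change looks like this (the -m 0 value is the default shown above):

```
# /etc/sysconfig/syslog -- before:
SYSLOGD_OPTIONS="-m 0"

# after, with remote reception enabled:
SYSLOGD_OPTIONS="-m 0 -r"
```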
After saving the file, restart the service with the service syslog restart command.

On Linux client
Ping the log server from the client, then open the /etc/syslog.conf file.

Now go to the end of the file and add an entry for the server in the form user.* @[server IP], as shown in the image.
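With the server IP used in this setup (192.168.0.254), the appended line looks like this:

```
# /etc/syslog.conf on the client: forward all user-facility messages
# to the central log server
user.*          @192.168.0.254
```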

After saving the file, restart the service with the service syslog restart command.

Now restart the client so it can send a log entry to the server. (Note that these logs are generated when the client boots, so do a restart, not a shutdown.)
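As a side note, you don't strictly have to reboot to produce a matching message: the standard logger utility submits to syslog with the user facility by default, which is exactly what the user.* rule forwards (a sketch; run it on the client):

```shell
# Hand-generate a user.notice message; it should appear on the log server.
logger "test message from Client1"
```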

Check client logs on the log server
To check a client's messages on the server, open the system log file (typically /var/log/messages).

At the end of this file you can see the logs from the clients.

by
cnuvasan@gmail.com
*******************************ALL THE BEST************************************

Thursday, June 30, 2016

How to configure failover and high availability network bonding on Linux

This tutorial explains how to configure network bonding on a Linux server. Before I start, let me explain what network bonding is and what it does. In a Windows environment, network bonding is called network teaming. It is a feature that helps any server architecture provide high availability and failover in scenarios where one of the main ethernet cables has a malfunction or is misconfigured.
Normally, it is a best practice and a must-have feature when you set up a server for production purposes. Even though this can be configured in a Linux environment, you first have to confirm with your network admin that the switches linked to your server support network bonding. There are several bonding modes that can be implemented in your server environment. Below is a list of the available modes and what they do:
  • Balance-rr
    This mode provides load balancing and fault tolerance (failover) via a round-robin policy: it transmits packets in sequential order from the first available slave through the last.
  • Active-Backup
    This mode provides fault tolerance via an active-backup policy. It means that once the bonding interface is up, only one of the ethernet slaves is active. The other ethernet slave becomes active only if the currently active slave fails. If you choose this mode, you will notice that the bonding MAC address is externally visible on only one network adapter. This is to avoid confusing the switch.
  • Balance-xor
    This mode provides load balancing and fault tolerance. It transmits based on the selected transmit hash policy. Alternate transmit policies may be selected via the xmit_hash_policy option.
  • Broadcast
    This mode provides fault tolerance only. It transmits everything on all slave ethernet interfaces.
  • 802.3ad
    This mode provides load balancing and fault tolerance. It creates an aggregation group that shares the same speed and duplex settings. It utilizes all slave ethernet interfaces in the active aggregator, and is based on the 802.3ad specification. To implement this mode, ethtool must support the base drivers for retrieving the speed and duplex mode of each slave. The switch must also support dynamic link aggregation. Normally, this requires Network Engineer intervention for detailed configuration.
  • Balance-TLB
    This mode provides load balancing capabilities; the name TLB stands for transmit load balancing. For this mode, if tlb_dynamic_lb=1, the outgoing traffic is distributed according to the current load on each slave. If tlb_dynamic_lb=0, dynamic load balancing is disabled and load is distributed only using the hash distribution. For this mode, ethtool must support the base drivers for retrieving the speed of each slave.
  • Balance-ALB
    This mode provides load balancing capabilities; the name ALB stands for adaptive load balancing. It is similar to balance-tlb, except that both send and receive traffic are balanced. Receive load balancing is achieved via ARP negotiation: the bonding driver intercepts the ARP replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond. For this mode, ethtool must support the base drivers for retrieving the speed of each slave.
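For reference, these mode names map to the numeric values the bonding driver takes in its options (this mapping is from the standard Linux bonding driver):

```
mode=0  balance-rr       mode=1  active-backup
mode=2  balance-xor      mode=3  broadcast
mode=4  802.3ad          mode=5  balance-tlb
mode=6  balance-alb
```

This is why BONDING_OPTS="mode=1 miimon=100" used later in this tutorial selects the active-backup policy.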


1. Preliminary Note

For this tutorial, I am using Oracle Linux 6.4 in the 32-bit version. Please note that even though the configuration is done under Oracle Linux, the steps also apply to CentOS and Red Hat distros and to 64-bit systems as well. The end result of our example setup will show that the connection to our bonding server remains up even after I disable one of the ethernet interfaces. In this example, I'll show how to apply network bonding using mode 1, the active-backup policy.


2. Installation Phase

For this process, there's no installation needed. A default Linux installation of a server includes all required packages for a network bonding configuration.


3. Configuration Phase

Before we start the configuration, first we need to ensure we have at least 2 ethernet interfaces configured in our server. To check this, go to the network configuration folder and list the available ethernet interfaces. Below are the steps:
cd /etc/sysconfig/network-scripts/
ls *ifcfg*eth*
The result is:
ifcfg-eth0 ifcfg-eth1
Notice that we currently have 2 ethernet interfaces set up in our server, ETH0 and ETH1.
Now let's configure a bonding interface called BOND0. This interface will be a virtual ethernet interface that contains the physical ethernet interface of ETH0 and ETH1. Below are the steps:
vi ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
MASTER=yes
IPADDR=172.20.43.110
NETMASK=255.255.255.0
GATEWAY=172.20.43.1
BONDING_OPTS="mode=1 miimon=100"
TYPE=Ethernet 
Then run:
ls *ifcfg*bon*
The result is:
ifcfg-bond0

That's all. Please notice that inside the BOND0 interface, I've included an IP address. This IP address will be the only IP address connected to our server. To proceed, we need to modify the physical ethernet interfaces related to the BOND0 interface. Below are the steps:

vi ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
MASTER=bond0
SLAVE=yes 
vi ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
MASTER=bond0
SLAVE=yes 
Done. We've modified the interfaces ETH0 and ETH1. Notice that we've removed the IP address from both interfaces and appended MASTER=bond0 and SLAVE=yes. This marks both interfaces as slave interfaces dedicated to the BOND0 interface.
To proceed with the configuration, let's create a bonding configuration file named bonding.conf under /etc/modprobe.d. Below are the steps:
vi /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bond0 mode=1 miimon=100 
modprobe bonding
Based on the above config, we've configured a bonding module using interface BOND0. We also assigned the bonding configuration to use mode=1, the active-backup policy. The option miimon=100 is the link-monitoring frequency in milliseconds, i.e. how often the bonding driver checks the status of each slave interface. As per the description above, this mode provides fault tolerance in the server network configuration.
As everything is set up, let's restart the network service to load the new configuration. Below are the steps:
service network restart
Shutting down interface eth0: [ OK ]
Shutting down interface eth1: [ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface bond0: [ OK ]

Excellent, now we have loaded the new configuration we made above. You'll notice that the new interface called BOND0 is shown in the network list. You'll also notice that there is no IP address assigned to the ETH0 and ETH1 interfaces; only the BOND0 interface shows the IP.

ifconfig
bond0 Link encap:Ethernet HWaddr 08:00:27:61:E4:88
inet addr:172.20.43.110 Bcast:172.20.43.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe61:e488/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:1723 errors:0 dropped:0 overruns:0 frame:0
TX packets:1110 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:147913 (144.4 KiB) TX bytes:108429 (105.8 KiB)
eth0 Link encap:Ethernet HWaddr 08:00:27:61:E4:88
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:1092 errors:0 dropped:0 overruns:0 frame:0
TX packets:1083 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:103486 (101.0 KiB) TX bytes:105439 (102.9 KiB)
eth1 Link encap:Ethernet HWaddr 08:00:27:61:E4:88
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:632 errors:0 dropped:0 overruns:0 frame:0
TX packets:28 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:44487 (43.4 KiB) TX bytes:3288 (3.2 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:208 errors:0 dropped:0 overruns:0 frame:0
TX packets:208 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:18080 (17.6 KiB) TX bytes:18080 (17.6 KiB)

You can also check the bonding status via this command:

cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:61:e4:88
Slave queue ID: 0
Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:c8:46:40
Slave queue ID: 0
Notice above that we've successfully converted the interfaces ETH0 and ETH1 into a bonding configuration using active-backup mode. The server is currently using interface ETH0; ETH1 acts as the backup interface.


4. Testing Phase

Now everything is configured as expected. Let's run a simple test to ensure the configuration we made is correct. For this test, we will log in to another server (or Linux desktop) and start pinging our bonding server to see if any intermittent connection loss happens during the test. Below are the steps:
login as: root
root@172.20.43.120's password:
Last login: Wed Sep 14 12:50:15 2016 from 172.20.43.80
ping 172.20.43.110
PING 172.20.43.110 (172.20.43.110) 56(84) bytes of data.
64 bytes from 172.20.43.110: icmp_seq=1 ttl=64 time=0.408 ms
64 bytes from 172.20.43.110: icmp_seq=2 ttl=64 time=0.424 ms
64 bytes from 172.20.43.110: icmp_seq=3 ttl=64 time=0.415 ms
64 bytes from 172.20.43.110: icmp_seq=4 ttl=64 time=0.427 ms
64 bytes from 172.20.43.110: icmp_seq=5 ttl=64 time=0.554 ms
64 bytes from 172.20.43.110: icmp_seq=6 ttl=64 time=0.443 ms
64 bytes from 172.20.43.110: icmp_seq=7 ttl=64 time=0.663 ms
64 bytes from 172.20.43.110: icmp_seq=8 ttl=64 time=0.961 ms
64 bytes from 172.20.43.110: icmp_seq=9 ttl=64 time=0.461 ms
64 bytes from 172.20.43.110: icmp_seq=10 ttl=64 time=0.544 ms
64 bytes from 172.20.43.110: icmp_seq=11 ttl=64 time=0.412 ms
64 bytes from 172.20.43.110: icmp_seq=12 ttl=64 time=0.464 ms
64 bytes from 172.20.43.110: icmp_seq=13 ttl=64 time=0.432 ms
During this time, let's go back to our bonding server and turn off the ethernet interface ETH0. Below are the steps:
ifconfig eth0
eth0 Link encap:Ethernet HWaddr 08:00:27:61:E4:88
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:1092 errors:0 dropped:0 overruns:0 frame:0
TX packets:1083 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:103486 (101.0 KiB) TX bytes:105439 (102.9 KiB)
ifdown eth0
Now we have brought down the network interface ETH0. Let's check the bonding status. Below are the steps:
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:c8:46:40
Slave queue ID: 0
You will notice that the ETH0 interface no longer appears in the bonding status. During this time, let's go back to the previous test server and check the continuous ping to our bonding server.
64 bytes from 172.20.43.110: icmp_seq=22 ttl=64 time=0.408 ms
64 bytes from 172.20.43.110: icmp_seq=23 ttl=64 time=0.402 ms
64 bytes from 172.20.43.110: icmp_seq=24 ttl=64 time=0.437 ms
64 bytes from 172.20.43.110: icmp_seq=25 ttl=64 time=0.504 ms
64 bytes from 172.20.43.110: icmp_seq=26 ttl=64 time=0.401 ms
64 bytes from 172.20.43.110: icmp_seq=27 ttl=64 time=0.454 ms
64 bytes from 172.20.43.110: icmp_seq=28 ttl=64 time=0.432 ms
64 bytes from 172.20.43.110: icmp_seq=29 ttl=64 time=0.434 ms
64 bytes from 172.20.43.110: icmp_seq=30 ttl=64 time=0.411 ms
64 bytes from 172.20.43.110: icmp_seq=31 ttl=64 time=0.554 ms
64 bytes from 172.20.43.110: icmp_seq=32 ttl=64 time=0.452 ms
64 bytes from 172.20.43.110: icmp_seq=33 ttl=64 time=0.408 ms
64 bytes from 172.20.43.110: icmp_seq=34 ttl=64 time=0.491 ms

Great, now you'll see that even though we have shut down the interface ETH0, we are still able to ping and access our bonding server. Now let's do one more test: bring the ETH0 interface back up and turn off the ETH1 interface.
ifup eth0
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:c8:46:40
Slave queue ID: 0
Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:61:e4:88
Slave queue ID: 0
As the ETH0 interface is already up again, let's shut down the ETH1 interface.
ifdown eth1
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:61:e4:88
Slave queue ID: 0

Now let's go back to the test server and check what happens to the continuous ping to our bonding server.

64 bytes from 172.20.43.110: icmp_seq=84 ttl=64 time=0.437 ms
64 bytes from 172.20.43.110: icmp_seq=85 ttl=64 time=0.504 ms
64 bytes from 172.20.43.110: icmp_seq=86 ttl=64 time=0.401 ms
64 bytes from 172.20.43.110: icmp_seq=87 ttl=64 time=0.454 ms
64 bytes from 172.20.43.110: icmp_seq=88 ttl=64 time=0.432 ms
64 bytes from 172.20.43.110: icmp_seq=89 ttl=64 time=0.434 ms
64 bytes from 172.20.43.110: icmp_seq=90 ttl=64 time=0.411 ms
64 bytes from 172.20.43.110: icmp_seq=91 ttl=64 time=0.420 ms
64 bytes from 172.20.43.110: icmp_seq=92 ttl=64 time=0.487 ms
64 bytes from 172.20.43.110: icmp_seq=93 ttl=64 time=0.551 ms
64 bytes from 172.20.43.110: icmp_seq=94 ttl=64 time=0.523 ms
64 bytes from 172.20.43.110: icmp_seq=95 ttl=64 time=0.479 ms

Thumbs up! We've successfully configured our bonding server and proven that it handles failover when a network interface goes down.


Thanks to: https://www.howtoforge.com/tutorial/how-to-configure-high-availability-and-network-bonding-on-linux/

Monday, May 2, 2016

Raid definition and configuration


What is RAID 

RAID stands for Redundant Array of Independent Disks. RAID is a method of combining several hard disks into one unit or group. It is a storage virtualization technology that combines multiple disk components into a logical unit for the purposes of data redundancy or performance improvement. It also offers fault tolerance and higher throughput levels than a single hard disk or a group of independent hard disks. RAID levels 0, 1, 5, 6 and 10 are the most popular configurations.
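The article stops at definitions, but as a minimal sketch of Linux software RAID configuration: a two-disk RAID 1 mirror can be created with mdadm. The device names /dev/sdb1 and /dev/sdc1 are placeholders, not from the article; this is destructive and requires root.

```shell
# Create the mirror from two example partitions (placeholders -- adjust!).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Watch the initial synchronization.
cat /proc/mdstat

# Put a filesystem on the array and mount it.
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt
```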


RAID Step and Configurations

RAID 0

RAID 0 splits data across disks, resulting in higher data throughput. The performance of this configuration is extremely high, but the loss of any disk in the array results in data loss. This level is commonly referred to as striping.
Minimum number of disks required are: 2
Performance: High
Redundancy: Low
Efficiency: High

Advantages:

  • High performance
  • Easy to implement
  • Highly efficient (no parity overhead)

Disadvantages:

  • No redundancy
  • Limited business use cases due to no fault tolerance
RAID 1

RAID 1 writes all data to two or more disks for 100% redundancy: if either disk fails, no data is lost. Compared to a single disk, RAID 1 tends to be fast on reads, slow on writes. This is a good entry-level redundant configuration. However, since an entire disk is a duplicate, the cost per MB is high. This is commonly referred to as mirroring.
Minimum number of disks required are: 2
Performance: Average
Redundancy: High
Efficiency: Low

Advantages:

  • Fault tolerant
  • Easy to recover data in case of disk failure
  • Easy to implement

Disadvantages:

  • Highly inefficient (100% parity overhead)
  • Not scalable (becomes very costly as number of disks increase)

RAID 5

RAID 5 stripes data at the block level across several disks, with parity distributed among the disks. The parity information allows recovery from the failure of any single disk. Read performance is good, but write performance is slower because parity must be computed and written with every write. The low ratio of parity to data means low redundancy overhead.
Minimum number of disks required are: 3
Performance: Average
Redundancy: High
Efficiency: High

Advantages:

  • Fault tolerant
  • High efficiency
  • Best choice in multi-user environments which are not write performance sensitive

Disadvantages:

  • Disk failure has a medium impact on throughput
  • Complex controller design
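RAID 5's single-disk recovery works because the parity block is the XOR of the data blocks: XOR-ing the parity with the surviving blocks reproduces the missing one. A toy sketch in shell arithmetic, with single byte values standing in for whole blocks:

```shell
# Two "data blocks" and their parity, as computed on every write.
A=170; B=204
P=$(( A ^ B ))

# If the disk holding A dies, A is rebuilt from parity and B.
rebuilt=$(( P ^ B ))
echo "$rebuilt"   # prints 170
```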
RAID 6

RAID 6 is an extension of RAID 5: data is striped at the block level across several disks with double parity distributed among the disks. As in RAID 5, parity information allows recovery from disk failure; the double parity allows recovery from the failure of any two disks. This gives RAID 6 additional redundancy at the cost of lower write performance (read performance is the same), and redundancy overhead remains low.
Minimum number of disks required are: 4
Performance: Average
Redundancy: High
Efficiency: High

Advantages:

  • Fault tolerant – increased redundancy over RAID 5
  • High efficiency
  • Remains a great option in multi-user environments which are not write performance sensitive

Disadvantages:

  • Write performance penalty over RAID 5
  • More expensive than RAID 5
  • Disk failure has a medium impact on throughput
  • Complex controller design
RAID 0+1

RAID 0+1 is a mirrored (RAID 1) array whose segments are striped (RAID 0) arrays. This configuration combines the security of RAID 1 with the performance boost of RAID 0 striping.
Minimum number of disks required are: 4
Performance: Very High
Redundancy: High
Efficiency: Low

Advantages:

  • Fault tolerant
  • Very high performance

Disadvantages:

  • Expensive
  • High Overhead
  • Very limited scalability
RAID 10

RAID 10 is a striped (RAID 0) array whose segments are mirrored (RAID 1) arrays. RAID 10 is a popular configuration for today's environments where high performance and security are required. In terms of performance it is similar to RAID 0+1; however, it has superior fault tolerance and rebuild performance.
Minimum number of disks required are: 4
Performance: Very High
Redundancy: Very High
Efficiency: Low

Advantages:

  • Extremely high fault tolerance – under certain circumstances, a RAID 10 array can sustain multiple simultaneous disk failures
  • Very high performance
  • Faster rebuild performance than 0+1

Disadvantages:

  • Very expensive
  • High overhead
  • Limited scalability
RAID 50

RAID 50 combines RAID 5 parity with RAID 0 striping. Although high in cost and complexity, its performance and fault tolerance are superior to RAID 5.
Minimum number of disks required are: 6
Performance: High
Redundancy: High
Efficiency: Average

Advantages:

  • Higher fault tolerance than RAID 5
  • Higher performance than RAID 5
  • Higher efficiency than RAID 5

Disadvantages:

  • Very expensive
  • Very complex / difficult to implement
RAID 60

RAID 60 combines RAID 6 double parity with RAID 0 striping. Although high in cost and complexity, its performance and fault tolerance are superior to RAID 6.
Minimum number of disks required are: 8
Performance: High
Redundancy: High
Efficiency: Average

Advantages:

  • Higher fault tolerance than RAID 6
  • Higher performance than RAID 6
  • Higher efficiency than RAID 6

Disadvantages:

  • Very expensive
  • Very complex / difficult to implement
- See more at: http://www.ittechpoint.com/2015/04/raid-definition-and-configuration.html

Installing VMWare ESXi 5.5 on a HP ProLiant Microserver Gen8


Introduction

In this tutorial we will set up VMware ESXi 5.5 as a test lab for serving virtual machines on a small-scale server system from Hewlett Packard, a ProLiant MicroServer G8 aka Gen8.
Two methods are presented below. I strongly advise you to take the second variant. The first one, which should have worked, ended in a Red Screen of Death!

Hardware modifications

As the basic machine is already a nice one, but not equipped with powerful enough hardware, I upgraded the server with additional components directly when buying the Gen8. I went for the smallest available model of the HP ProLiant MicroServer Gen8 servers, the G1610T.
I bought a G1610T and, in addition, the following hardware:
  • 1x Intel Xeon E3-1265L v2, a quad-core processor (2.5 GHz, Socket 1155, L3 cache, 45 watts, BX80637E31265L2) with a turbo frequency of up to 3.5 GHz
  • 2x Kingston KTH-PL316E/8G DDR3 RAM with 8 GB capacity each (1600 MHz, PC3-12800, and ECC!)
  • 2x Seagate Barracuda ST3000DM001 SATA III hard drives with 3 TB each (7200 rpm, 64 MB cache)
  • 1x USB thumb drive (found one in the drawer…)
The main pros of this server are:
  • It includes a cheap but fine hardware disk controller, an HP Smart Array B120i with a throughput of 91.4K IOPS.
  • The form factor! – It is in fact an Ultra Micro Tower.
  • Less than 150 W power consumption – even with four HDDs, it stays below 100 W!
  • Two 1Gb Ethernet ports and one extra dedicated iLO 4 network port.
  • Internal micro SD and USB ports to use as additional drive ports.
I will not cover the hardware installation here in detail, but just link to other pages that mention working CPU/RAM upgrades. To date, I haven't seen anyone who managed to get 32 GB of RAM working on the G8 server, which would be a nice opportunity for hosting. But I am quite sure that with a wider distribution of 16 GB ECC memory modules, someone will give it a try and make it work. It may be that HP will provide some kind of BIOS update to officially support more total memory.
See the following pages for more information about the servers:

Additional preparatory work

Configure HP Integrated Lights-Out (iLO)

You should set up iLO before the actual installation process, as this will make your further server life easier, and of course because this tutorial makes use of iLO. This does not mean you cannot go without iLO, but I suggest you give it a try. Just check my previous post about “iLO on the HP ProLiant Microserver Gen8” if you need any help regarding iLO.

Download the VMware ISO image

Download the modified current version of ESXi from the VMware web page. You will be forwarded from HP's to VMware's web page. You have to log in or create a login during this process. The ISO's name should be similar to: VMware-ESXi-5.5.0-Update1-1746018-HP-5.74.27-Jun2014.iso

Upgrading to latest available firmware

My server was delivered with version 1.3 of HP Intelligent Provisioning. The current version at the time of writing this tutorial is already v1.5, so we will cover this firmware upgrade here as well.
Open the Remote Console, found under “Remote Console” > “Java Integrated Remote Console” (Java IRC), which provides remote access to the system through iLO.
HP Smart Deployment
Open the “Maintenance” section on the right and select “Firmware Update” there.
Maintenance section
Click on “Firmware Update” here and install all available updates. This process will take a while.

Creating disk array

In the “HP Smart Storage Administrator” (SSA) (also available in the Maintenance section) you have to create a hard disk array. I went for 2×3 TB HDDs as RAID 1 here. As I am not planning to use the disk array as a boot volume but for the provision of virtual machines, and will install ESXi on the USB thumb drive, we can exceed the 2 TiB limit here.
Smart Storage Administrator

Installation of VMWare ESXi 5.5
(Method 1 – a non-working solution!)

After installing the additional hardware, upgrading the iLO and creating a disk array, we can now initiate the actual installation of our hypervisor system.
  1. Log in again to your iLO web view and open the mentioned remote console again.
    HP Intelligent Provisioning
    Server iLO Management

  2. This time, choose “Config and Install”
    HP Intelligent Provisioning
    “Config and Install” or “Service”?
  3. In the next step, choose “minimum power consumption”, “skip update” and “Keep Array configuration”, as we already handled this procedure just before.
  4. On the next screen, just choose “VMWare ESXi/vSphere-Image”, “Manual Install” and “Drive media” here.
    Install from Image file
  5. From the “Virtual Drives” in your Remote Console, just check “Image File CD-Rom/DVD” and select the downloaded ISO file there.
    Mounting media
  6. Confirming on Step 4 will install the VMWare ESXi Server on your HP system. This will take some minutes to complete.
  7. After the installation, the server will be rebooted. This will take an additional three minutes or more. You can stay connected to the iLO during that time!
  8. The server will restart directly into the ESXi Server, displaying the IP address where it is reachable.
This is how it should work! In my case, this way wasn't working properly: after rebooting, I ended up on a Red Screen of Death! So I went for the second variant, which is explained in the next section.

Installation of VMWare ESXi 5.5
(Method 2, working solution!)

If you encounter errors during the first method, please check the following variant, which might be even better, as you are actually performing a manual installation.

Choosing your boot device

In fact there are several ways to boot the installation media; I will just outline two of them.
  1. You may connect a USB thumb drive with the installation media preloaded, as explained in the very short article “Preparing ESXi boot image for USB Flash drive”. Just plug it in.
  2. Choose to add the .iso file as a virtual drive to the iLO remote console. Go to “Virtual Drives” > “Image File CD-Rom/DVD” > Select the installation .iso file you just downloaded before.
    Choosing an image file as virtual drive
Both of the ways should work for the following install process.

The main installation process

  1. Boot your server and hit “F11” to go to the boot menu.
    HP Proliant Boot Screen
  2. In the menu, you can choose
    1. “USB DriveKey”, if you filled a USB thumb drive before with the ISO file.
      For this option, you might take a look at the “Red Screen of Death” information to select the right boot device (it is in fact the first external drive!).
    2. “One Time Boot to CD-ROM”, if you added the virtual drive before.
    Boot Menu
  3. After selecting your device, the pre-installation screen will be shown. If you made it this far, the rest of the process should work properly. Just hit “Enter” here to proceed, or wait a couple of seconds for the installer to continue.
    Pre-Install Screen
    Loading the necessary data for ESXi installation
  4. In the next step, you will be welcomed to the “Installation of ESXi 5.5.0”. Just hit “Enter” to proceed.
    Installation of ESXi starts here
  5. Accept the EULA by pressing “F11”.
    Accept the EULA
  6. Now we have to select a disk to install the ESXi system to. Two types of storage devices are listed, local and remote. The local ones include these volumes:
    1. HP Logical Volume with 2.73 TiB (on the Raid controller)
    2. USB 2.0 Flash Disk – The internal USB thumb drive with 3.73 GiB where we will install the ESXi to.
    3. An HP iLO device “LUN 00 Media 0”, which is the virtual CD-ROM drive we mounted the ISO to.
    As we will use the RAID logical volume as data storage for the virtual machines later, we will take the USB 2.0 Flash Disk instead.
    Choose the destination
  7. Now we choose our language. In my case, German.
    Language settings
  8. The installer asks for a root password. Just choose one here; it is suggested to add a new account later in the vSphere Client, after the installation.
    Providing root password
  9. With “F11” we can now start the installation; cancel with “ESC”, or go back and make changes with “F9”. Double-check that you chose the right device, then proceed.
    Confirm Install
  10. The installation itself took about 10-12 minutes. Just wait, or go for a coffee.
    Installation in progress
  11. A final screen shows that the installation completed successfully. At this stage, you should remove the installation media: either remove the external USB thumb drive or the virtual CD-ROM drive, then press “Enter” to reboot.
    Installation complete

Post-installation

First boot

The server will reboot, and this will take a couple of minutes. When the Starting ESXi Server 5.5 screen is shown, we are almost done. When the last (yellow) screen is presented, the server is ready for the deployment of virtual machines. You will see the URL you can access in the middle left of the screen.
Rebooting server
Starting ESXi Server 5.5
ESXi Server ready

Installing vSphere Client

Open the presented URL. A link is presented there to download the vSphere Client. Download the vSphere Client and install it. Open the client and enter the IP address. The user is “root” and the password is the one you entered during the installation process.

Creating a Data Storage

After logging in to the server, you need to add a data storage. The following message (in my case in German) should look similar for you:
“The ESXi host does not provide a persistent storage”, and a bit below, “To add storage, click here”, as shown in the following picture.
Create a storage device
Choose VMFS-5 to get 2 TB+ support during this process.
Add a new storage on the raid controller
Choose VMFS-5
You are now able to work with your ESXi Server. Have fun with your test virtualization server!
- See more at: http://blog.ittechpoint.com/2015/10/installing-vmware-esxi-55-on-hp-ProLiant-Microserver-Gen8.html