Archive for February, 2006
Those who have been running VMware for a while know we always had two choices of NIC interface in our virtual machines: the vlance (AMD PCnet32) NIC and the vmxnet (VMware) NIC. This distinction faded as the vlance card became ‘intelligent’: when VMware Tools is running in your VM, automatically optimized code is used to talk to the NIC interface; when VMware Tools is not loaded, it just runs with basic AMD emulation.
But today (I am still learning every day) I found out we have a new choice: we can also have an Intel PRO/1000 MT in our virtual machines. I have tested it with VMware Workstation 5.5 and ESX3 (beta2), but I am quite sure this also works in VMware Player and VMware Server.
So how do we get this Intel PRO/1000 MT card into our VM? Quite simply: edit your .VMX file and make sure the Ethernet configuration has this line in it:
ethernet0.virtualDev = "e1000"
Under most products this works easily, except on ESX3 (beta2). In my ESX3/VC2 environment, some engine in the background kept changing my configuration back every time. After some long, frustrating testing, I found a solution. How? Change the .VMX file with vi. Save it, but leave vi open. Now power on the virtual machine. vi keeps the file locked, so ESX cannot change it back, and voila.. it works.
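On products where nothing rewrites the file behind your back, the edit itself can be scripted. Here is a minimal sketch; it runs against a scratch file it creates itself, because the path and contents are made up for illustration, so point VMX at your real .vmx instead:

```shell
# Sketch only: operates on a throwaway demo file -- point VMX at your real .vmx.
VMX=$(mktemp /tmp/demo.vmx.XXXXXX)
printf 'ethernet0.present = "true"\nethernet0.virtualDev = "vlance"\n' > "$VMX"

# Drop any existing virtualDev setting for ethernet0, then request the e1000.
sed -i.bak '/^ethernet0\.virtualDev/d' "$VMX"
echo 'ethernet0.virtualDev = "e1000"' >> "$VMX"

cat "$VMX"
```

On ESX3 (beta2) the vi trick above is still needed: keep vi open on the file while you power the VM on, so the configuration cannot be rewritten underneath you.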
So now the question of course is: why would I do this? Well, in my case I started playing with this because I had a virtual appliance that did network analysis, and the software company had only configured it for an Intel PRO/1000 NIC, so I had to make this change.
Now aware of this change, I was of course curious whether there was a performance difference between the vlance and the Intel PRO NIC. Well, I am not allowed to publish benchmark tests, but I can tell you there is. In different scenarios, though, I had different winners: in pure TCP network traffic the vlance was faster, while in pure UDP traffic the Intel won by far. Doing file copies (so disk and network together), the Intel frequently won as well. So if you have a VM that really needs to squeeze every bit of performance out of a NIC, go have a test and see which one works for you. Of course not only NIC speed should be considered in this equation, but also the difference in CPU usage.
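If you want to run such a comparison yourself, a generic tool like iperf is one way to do it. The commands below are a sketch; the server address is a placeholder, not a real host. Run the same tests once with the vlance NIC and once with the e1000, then compare:

```shell
# On a physical box elsewhere on the network -- one run for TCP, one for UDP:
iperf -s          # TCP server
iperf -s -u       # UDP server

# Inside the VM (192.168.1.10 is a placeholder for the server's address):
iperf -c 192.168.1.10                # TCP throughput test
iperf -c 192.168.1.10 -u -b 100M     # UDP test at a 100 Mbit/s offered load
```

While the tests run, also keep an eye on the VM's CPU usage; as noted above, the two NICs do not necessarily cost the same amount of CPU per megabit.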
I also think I know why the Intel PRO/1000 card was introduced in the VM: all 64-bit virtual machines use this card by default, as there is a 64-bit driver available for this NIC and not for the vlance card. But in my tests I was also able to put this NIC in my 32-bit virtual machines.
This is the first time (and probably the only time) I wish I were not a VMware employee. Anyone who is not an employee can make an awesome cool Virtual Appliance and win $100,000!!!
So, what is this whole Virtual Appliance thing about? Well, like in the real world, you can buy appliances, right? Firewall-in-a-box, NetScalers, Packeteers, etc… Black boxes (or whatever color they have). You turn them on; they have no monitor, keyboard or mouse, just a simple LCD screen, and they almost always work.
The Virtual Appliance should be the same thing: a virtual machine that you just power on, and it gives you a certain (hopefully useful) application service. I think that, like a real appliance, the Virtual Appliance should really not have a console. On the console it should, like a true appliance, just display that small LCD-screen information, mainly its IP address, so you know where to surf to configure the thing.
Well, I have tons of ideas for Virtual Appliances, but as I said, I am excluded from the competition. So if you do want to participate and need an idea, drop me a note.
More about the competition here.
As you probably know, VMware ESX Server only supports SCSI and SAN storage. ESX3 will start adding support for iSCSI and NAS as well, but for me, playing with ESX on my home servers, this is still not a good option. On the VMware discussion forums I read some threads saying that some SATA controllers were working, as they use the same drivers as their SCSI counterparts.
As I am not afraid to take some risk, I started searching on eBay and found a MegaRAID SATA 300-8X card. This is one of the latest models, supports SATA2 disks, and has 128MB of cache on board. I was able to buy it for ‘just’ $300, which really was a good deal if you look around.
So a few days later I received the controller and tried to toss it into my nice AMD dual-core server… with no luck. The controller is a PCI-X-only card, so it does not go into a simple PCI slot… mmm. That was a disappointment. OK, back to eBay to find a motherboard with PCI-X slots and enough processing power to run a decent ESX server. I found an Intel server with dual Xeons, PCI-X slots, and only 512MB of RAM. The lack of RAM was not really an issue, as I have plenty of DDR DIMMs lying around. So I made sure I won the server on eBay, costing me 651 pounds. Mmmm, this was really starting to become a more expensive exercise than I had initially thought.
Picked up the server the next day, and jippie, the controller card actually went in. I pulled 4GB of RAM out of another server and stuck it into this new Intel Xeon box… no luck again! This being a server motherboard, it only wants registered ECC memory, not the standard DDR DIMMs that go into our desktop PCs. So you probably guessed it: me, back on eBay, trying to find some additional memory. I found 2x 1GB ECC registered memory, so again an extra investment.
So finally I was ready: I had my controller card, had built a new server around it, and was ready to run. I attached 2x 500GB SATA2 drives to it, tossed them into a RAID 0 (striping) set, and was ready to go. First I tested with ESX 2.5.2, and after making a modification in /etc/vmware/vmware-device.map so that my card is recognized, everything worked great. I was able to create a VMFS on my SATA2 stripe set and start creating VMs.
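For reference, here is roughly how I would approach that map-file change. Treat it as a sketch: the exact line format has to be copied from an existing MegaRAID entry in your own file, not from me, and the commands touch a file ESX reads at boot, so back it up first:

```shell
# Keep a backup before touching anything ESX reads at boot.
cp /etc/vmware/vmware-device.map /etc/vmware/vmware-device.map.orig

# Look at how the already-supported MegaRAID models are listed...
grep -i megaraid /etc/vmware/vmware-device.map

# ...then locate your own card and note its PCI vendor:device ID,
# so you can duplicate the closest matching map line with your card's ID
# (reboot afterwards so the map is re-read).
lspci | grep -i raid     # find the card and its slot
lspci -n                 # numeric vendor:device IDs for that slot
```

The same edit carried over unchanged to my ESX3 beta box, as described below.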
Now, looking ahead, I of course also wanted to make sure this works on ESX3. Being a VMware employee has some benefits, including early access to betas. So I can tell you everything runs perfectly fine on my ESX3 (beta) server as well, after the same modification to the vmware-device.map file.
So, a long story just to tell you that you can run the MegaRAID cards under ESX. But I also just wanted to share some frustrations with you:
MegaRAID 300 8x card: $300
Dual Xeon Server: 651 pounds
2x 1GB registered ECC memory: 173 pounds
Running ESX on SATA2: priceless
Update for ESX 3.5
Well, as ESX 3.5 is released today, I updated my server and still want my nice SATA MegaRAID controller to work. The setup has changed a bit: instead of modifying the vmware-device.map file, you need to do the following steps:
… and everything is still working fine again (I am still booting off a normal IDE HDD, but can now use all my SATA disks for VMFS).
The list of Virtual Appliances is growing and growing. Not only small apps are being made available as Virtual Appliances; IBM is now also offering the best database server on this planet in a virtual machine: a prepackaged SuSE with IBM DB2 installed. Jippie!