I had to be the first one to make a really bad joke.
Everyone will admit that how to efficiently back up your VMs is a hot topic. Remember, VDP is VMware's product, but a lot of EMC technical people should be able to tell you right away how it works. VDP will be an excellent fit for a lot of customers with environments where they can't spend extra on "virtual" backups.
Here are some of my favorite things in the new VDP.
- First, it is built right into the new vSphere Web Client.
- A simple wizard guides you through creating backup jobs.
- VDP uses Changed Block Tracking to accelerate full restores.
- Integrated self-service file-level restore. What is better than file-level restore? No one opening a ticket to ask you to do it!
The other stuff
Someone will eventually ask: what is the difference between VDP and Avamar? VDP is limited to:
- Max # VMs: 100
- Storage pool: up to 2 TB
- Replication (DR): none
- Image-level backup only
So, to be 100% honest, I have had this book on my desk for several months. Just staring at me. Calling my name. VMware Press provided this copy to me along with Mike Laverick's SRM book, so I am finally going to review the first one.
Cody Bunch does an amazing job of breaking down one of the most mystifying yet powerful products hidden in the VMware portfolio. VMware vCenter Orchestrator is almost mythical in its promises of automating the typical tasks of a vSphere administrator. While you could bang your head against the wall for weeks trying to figure out how to properly set up the Orchestrator server and client, I was able to use Cody's guidance to have it operational and running test workflows in just a few hours (and I am a slow reader).
I can't stress enough the need for automation and orchestration in today's virtual machine environments. The business is demanding more and more from the virtualization team, and in order to deliver, vCenter Orchestrator is a good start since you probably already OWN it.
Hopefully soon there will be an update with information on the vApp version of Orchestrator. Check it out here on Amazon or your favorite book reseller.
I was meeting with a customer today and had to stop for a second when they said they were using 10 TB datastores in vSphere 4.1.
At first I was running through it in my head: maybe NFS? No, they are an all-block shop. Oh wait, yeah: extents. They were using 2 TB minus 512 byte LUNs to create a giant datastore. I asked why. The answer was simple: "so we only manage one datastore."
I responded: well, check out Storage DRS in vSphere 5! It gives you that one point of management and automatic placement across multiple datastores. Additionally, you can actually find which VM lives where, and use Storage Maintenance Mode to do storage-related maintenance. Right now they are locked into using extents. If they change their datastores into a cluster, they gain flexibility without losing the ease of management.
I wanted to use the opportunity to list some things I think about extents with VMware.
- Extents do not equal bad. Just have the right reason to use them, and running out of space is not one.
- If you lose one extent you don't lose everything, unless that one is the first extent.
- VMware places blocks on extents in a roughly even fashion. It is not fill-and-spill. While not really load balancing, you don't hammer just one LUN at a time.
An extent-based datastore is like a stack of LUNs. Don't knock out the bottom block!
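To illustrate that placement point, here is a toy sketch (my own illustration, not VMFS's actual allocator) contrasting fill-and-spill with spreading blocks across extents:

```python
# Toy model only -- the real VMFS allocator is more complex than this.
def fill_and_spill(num_blocks, extent_size, num_extents):
    """Fill extent 0 completely before touching extent 1, and so on."""
    counts = [0] * num_extents
    for block in range(num_blocks):
        counts[block // extent_size] += 1
    return counts

def spread(num_blocks, num_extents):
    """Place blocks across extents in a roughly even fashion."""
    counts = [0] * num_extents
    for block in range(num_blocks):
        counts[block % num_extents] += 1
    return counts

print(fill_and_spill(10, 8, 3))  # [8, 2, 0] -- one LUN takes nearly all the IO
print(spread(10, 3))             # [4, 3, 3] -- every LUN shares the load
```

The second pattern is why a multi-extent datastore doesn't just hammer one LUN at a time, even though it isn't true load balancing.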
Some points about Storage DRS.
- Storage DRS places VMDKs based on IO and space metrics.
- Storage DRS and SRM 5 don't play nice, last time I checked (2/13/12).
- Combine Storage DRS with storage policies and you have a really easy way to place and manage VMs on storage. Just set the policy and check whether it is compliant.
A Storage DRS cluster is multiple datastores appearing as one.
Some links on the topics:
Some more information from VMware on Extents
More on Storage DRS (SDRS)
In conclusion, SDRS may be removing some of the last reasons to use an extent (getting multiple-LUN performance with a single point of management). Add to that being able to have up to 64 TB datastores with VMFS, and using extents will become even rarer than before. Unless you have another reason? Post it in the comments!
Sometimes I am sitting up late at night and I have a thought of something I think would be cool, like if x and y worked together to get z. This time I thought this was good enough to blog about. Now I want to stress that I do not have any special insight into what is coming. This is just how I wish things would be.
Today there are two end-user portals from VMware: vCloud Director as the self-service cloud interface, and View Manager as the access point for end users to reach virtual desktops. Each interface interacts with one or more vCenter instances to deploy, manage, and destroy virtual machines. Below is a way-oversimplified representation of how View and vCloud Director (plus Request Manager) relate to the user experience. I think maybe there is a divide where there does not need to be (someday).
What if vCloud Director could be used in the future as the one-stop user interface portal? Leveraging vCloud Request Manager, vCD could deploy cloud resources: desktops, servers, or both. vCloud Director would be the orchestration piece for VMware View. Once the request for a desktop is approved, the entitlement to the correct pool is automatically given. If extra desktops are needed, the cloning begins. vCloud Director would learn to speak View Composer's language, providing the ever-elusive ability to use linked clones with vCD. vCloud Director with this feature could be great for lab and test/dev environments. The best part is that, operationally, there is one place to request, deploy, and manage all virtual resources from the end-user perspective. This could eliminate the ambiguity for a user (and service providers) on how to consume (and deliver) resources. This has implications for how IaaS and DaaS would be architected.
Now some drawbacks
You might say: hey, Jon, are you going to make me buy and run vCD just to get VDI? No. The beauty of the APIs is that each product could stand alone or work together (in my vision of how they should work). Maybe even leverage Composer with vCD without View, or Request Manager with View without vCD.
One Cloud Portal to rule them all.
So I often have epiphany teasers while driving long distances or stuck in traffic. I call them teasers because they are never fully developed ideas and often disappear into thoughts about passing cars, or yelling at the person on their cell phone going 15 MPH taking up 2 lanes.
Here are some I was able to save today (VMware related):
1. What if I DID want an HA cluster split across two different locations? Why?
2. Why must we over-subscribe iSCSI vmkernel ports to make the best use of 1 GbE physical NICs? Is it just the software iSCSI initiator in vSphere? Is it just something that happens with IP storage? I should test that sometime…
3. If I had 10 GbE NICs I wouldn't use them for the Service Console or vMotion; that would be a waste. No wait, vMotion ports could use them to speed up your vMotions.
4. Why do people use VLAN 1 for their production servers? Didn't their momma teach 'em?
5. People shouldn't fear using extents; they are not that bad. No, maybe they are. Nah, I bet they are fine. How often does just one LUN go down? What are the chances of it being the first LUN in your extent? OK, maybe it happens a bunch. I am too scared to try it today.
EqualLogic PS Series Design Considerations
VMware vSphere introduces support for multipathing for iSCSI. EqualLogic released a recommended configuration for using MPIO with iSCSI. I have a few observations after working with MPIO and iSCSI. The main lesson is to know the capabilities of the storage before you go trying to see how many paths you can have with active IO.
- EqualLogic defines a host connection as 1 iSCSI path to a volume. At VMware Partner Exchange 2010 I was told by a Dell guy, “Yeah, gotta read those release notes!”
- EqualLogic limits connections to 128 per pool and 256 per group on the 4000 series (see table 1 for the full breakdown), and to 512 per pool and 2048 per group on the 6000 series arrays.
- The EqualLogic MPIO recommendation mentioned above can consume many connections with just a few vSphere hosts.
I was under the false impression that by "hosts" we were talking about physical connections to the array, especially since the datasheet says "Hosts Accessing PS Series Group". It actually means iSCSI connections to a volume. Therefore, if you have 1 host with 128 volumes each connected via a single iSCSI path, you are already at your limit (on the PS4000).
An example of how fast vSphere iSCSI MPIO (Round Robin) can consume the available connections can be seen in this scenario: five vSphere hosts with 2 network cards each on the iSCSI network. If we follow the whitepaper above, we will create 4 vmkernel ports per host. Each vmkernel port creates an additional connection per volume. Therefore, if we have 10 300 GB volumes for datastores, we already have 200 iSCSI connections to our EqualLogic array. That is really no problem for the 6000 series, but the 4000 will start to drop connections. I have not even added the connections created by the vStorage API/VCB-capable backup server. So here is a formula*:
N – number of hosts
V – number of vmkernel ports
T – number of targeted volumes
B – number of connections from the backup server
C – number of connections
(N * V * T) + B = C
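The formula is easy to sketch in Python; the example numbers below are the hypothetical scenario from this post (five hosts, four vmkernel ports, ten volumes):

```python
def iscsi_connections(hosts, vmkernel_ports, volumes, backup_connections=0):
    """Estimate EqualLogic iSCSI connections: (N * V * T) + B = C.

    Each vmkernel port on each host opens one iSCSI connection per
    targeted volume; the backup server adds its connections on top.
    Real counts can run higher because of iSCSI redirection.
    """
    return hosts * vmkernel_ports * volumes + backup_connections

# Scenario from this post: 5 hosts x 4 vmkernel ports x 10 volumes.
print(iscsi_connections(hosts=5, vmkernel_ports=4, volumes=10))  # 200
```

At 200 connections you are already well past the PS4000's 128-per-pool limit before the backup server even logs in.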
EqualLogic PS Series Array
Use multiple pools within the group in order to avoid dropped iSCSI connections and provide scalability. The tradeoff is that this reduces the number of spindles you are hitting with your IO. Taking care to know the capacity of the array will help avoid big problems down the road.
*I have seen the connection count actually be higher, and I can only figure this is because of the way EqualLogic does iSCSI redirection.
So here is my first experience trying to deploy the new vShield Zones security product included in VMware's vSphere.
First, vShield Zones is different from VMsafe. The way I understand it, vShield Zones is like your border security, but inside of vSphere: it divides and segregates networks and virtual machines. VMsafe is endpoint protection built into the kernel. Reflex has the first VMsafe-certified appliance, but I have not had a chance to try it yet. (Need more hardware, hint hint.)
The User Guide talks about downloading an appliance, but you actually download an ISO and then run an installer that unzips a folder with the 2 appliances. One is the vShield Zones Manager and the other is the actual firewall. The extra step of using the ISO image was annoying, but I guess I am just a whiner. On a super basic level (I am not here to re-write the user guide): import the appliance for the Manager, then import the firewall and convert the firewall into a template. The Manager appliance takes care of the rest. Note: Internet Explorer 8 and the Manager Web UI don't work together. I used IE 7 just fine.
- You won’t get this far in IE8
- Deploying the vShield is straightforward. It creates new vSwitches and port groups, and the Manager UI indicates which network is protected and unprotected. This is not in vCenter; it is still in the web interface.
- As you deploy the vShield enjoy watching the tasks in vCenter.
All things considered, it is a good product. I don't have enough throughput on my little lab machine to really test any impact using vShield would have on performance. If you are a service provider, I think it would be a great add-on to ensure some separation of virtual machines.
So I was updating some of my blog posts on the esxcfg-* commands with any changes in ESX 4. I wrote earlier that I did not know much about the esxcfg-advcfg command. Since writing that post at the end of 2008, I found that Duncan Epping used esxcfg-advcfg in 3.5 to set the option to rescan all the HBAs. I thought this was a great shortcut and decided to try it out in vSphere, but:
[root@esx4 ~]# esxcfg-advcfg -s 1 /Scsi/ScsiRescanAllHbas
Exception occured: Unable to find option ScsiRescanAllHbas
So I looked through vCenter 4 and did not find the option under Scsi. I looked around some in the other Advanced Options, and it is nowhere to be found.
Has this been removed or moved somewhere else? If you know, hit me up on Twitter: @2vcps.
So today I got around to putting ESXi 4 on my spare box at home. I first deployed a new virtual server and decided to use the thin provisioning built into the new version. After getting everything set up, I was surprised to still see this.
I was like, DANG! That is some awesome thin provisioning. But I was more thinking something had to be wrong: a 42 GB drive with Windows 2008 only using 2.28 KB? That is sweet! I thought for sure, since I had not seen this screen in the VM's information, that it had already refreshed. It was too good to be true, though. I clicked Refresh Storage and it ended up like this, which made a lot more sense for a fresh and patched Windows install. So far this leads to my first question: why the manual refresh? Should this refresh automatically when the screen redraws?