VMware vCenter Operations Manager and Pure Storage Rest API

I was playing with the REST API and PowerShell in order to provision vSphere datastores, and I started to think: what else could we do with all the cool information we get from the Pure Storage REST API?
I remembered some really cool people here and here had used the open HTTP Post adapter, so I started to work out how to pull data out of the FlashArray and into vCOPS.

Pure Dashboard


We already get some pretty awesome stats in the Pure web GUI. What we don't get is trending and analysis: I can't see how my data reduction increases and decreases over time, and I don't get stats from multiple arrays in one place.

First Dashboard with Array Stats, Heat Map, and Health based on the vCOps baseline


Array Level Stats

First, each of these scripts requires PowerShell 4.0.
1. Enter the FlashArray names in the $FlashArrayName variable. You can see I have 4 arrays in the Pure SE lab.
2. Create a file with the credential for vCOPS. Since we are going to schedule this script to run every few minutes, you need this file. More information on creating that credential here: http://blogs.technet.com/b/robcost/archive/2008/05/01/powershell-tip-storing-and-using-password-credentials.aspx

You MUST read and do that to create the cred.txt file in c:\temp that I reference in the script.

3. Change the $url variable to the IP or name of your vCOPS UI server.
4. Don't forget to modify the Pure FlashArray username and password in each script.

Find it on GitHub https://github.com/2vcps/purevcops-array

# ignore SSL/TLS certificate warnings when connecting
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
$FlashArrayName = @('pure1','pure2','pure3','pure4')

$AuthAction = @{
    password = "pass"
    username = "user"
}

# load the stored vCOps credential created earlier (see step 2 above)
$pass = cat C:\temp\cred.txt | ConvertTo-SecureString
$mycred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "admin",$pass

# function to perform the HTTP Post web request
function post-vcops ($custval,$custval2,$custval3) {
    # url for the vCOps UI VM. Should be the IP, NetBIOS name or FQDN
    $url = "<vcops ip>"

    # credentials for vCOps. When running as a scheduled task, use the stored credential loaded above
    $cred = $mycred

    # sets resource name
    $resname = $custval3

    # sets adapter kind and resource kind
    $adaptkind = "Http Post"
    $reskind = "Pure FlashArray"

    # sets resource description
    $resdesc = "<flasharraydesc>"

    # sets the metric name
    $metname = $custval2

    # sets the alarm level
    $alrmlev = "0"

    # sets the alarm message
    $alrmmsg = "alarm message"

    # sets the time in milliseconds since the epoch. Taking the end time in UTC
    # keeps the timestamp from landing hours behind (the local time zone offset).
    $epoch = [decimal]::Round((New-TimeSpan -Start (Get-Date -Date "01/01/1970") -End (Get-Date).ToUniversalTime()).TotalMilliseconds)

    # combines the values above into the body for the Http Post request.
    # The fields are comma separated and positional, so extra commas remain
    # as placeholders for parameters we did not specify.
    $body = "$resname,$adaptkind,$reskind,,$resdesc`n$metname,$alrmlev,$alrmmsg,$epoch,$custval"

    # executes the Http Post request
    Invoke-WebRequest -Uri "https://$url/HttpPostAdapter/OpenAPIServlet" -Credential $cred -Method Post -Body $body
}

ForEach ($element in $FlashArrayName) {
    $faName = $element.ToString()
    $ApiToken = Invoke-RestMethod -Method Post -Uri "https://${faName}/api/1.1/auth/apitoken" -Body $AuthAction

    $SessionAction = @{
        api_token = $ApiToken.api_token
    }
    Invoke-RestMethod -Method Post -Uri "https://${faName}/api/1.1/auth/session" -Body $SessionAction -SessionVariable Session

    $PureStats = Invoke-RestMethod -Method Get -Uri "https://${faName}/api/1.1/array?action=monitor" -WebSession $Session
    $PureArray = Invoke-RestMethod -Method Get -Uri "https://${faName}/api/1.1/array?space=true" -WebSession $Session

    # performance stats from the monitor call
    ForEach ($FlashArray in $PureStats) {
        $wIOs = $FlashArray.writes_per_sec
        $rIOs = $FlashArray.reads_per_sec
        $rLatency = $FlashArray.usec_per_read_op
        $wLatency = $FlashArray.usec_per_write_op
        $queueDepth = $FlashArray.queue_depth
        $bwInbound = $FlashArray.input_per_sec
        $bwOutbound = $FlashArray.output_per_sec

        post-vcops $wIOs "Write IO" $faName
        post-vcops $rIOs "Read IO" $faName
        post-vcops $rLatency "Read Latency" $faName
        post-vcops $wLatency "Write Latency" $faName
        post-vcops $queueDepth "Queue Depth" $faName
        post-vcops $bwInbound "Input per Sec" $faName
        post-vcops $bwOutbound "Output per Sec" $faName
    }

    # capacity and space stats from the space call
    ForEach ($FlashArray in $PureArray) {
        $arrayCap  = $FlashArray.capacity
        $arrayDR   = $FlashArray.data_reduction
        $arraySS   = $FlashArray.shared_space
        $arraySnap = $FlashArray.snapshots
        $arraySys  = $FlashArray.system
        $arrayTP   = $FlashArray.thin_provisioning
        $arrayTot  = $FlashArray.total
        $arrayTR   = $FlashArray.total_reduction
        $arrayVol  = $FlashArray.volumes

        post-vcops $arrayDR "Real Data Reduction" $faName
        post-vcops $arraySS "Shared Space" $faName
        post-vcops $arraySnap "Snapshot Space" $faName
        post-vcops $arraySys "System Space" $faName
        post-vcops $arrayTP "TP Space" $faName
        post-vcops $arrayTot "Total Space" $faName
        post-vcops $arrayTR "Faker Total Reduction" $faName
    }
}
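The wire format the Http Post adapter expects is easier to see in isolation. Here is a small Python sketch (purely illustrative; the real scripts are the PowerShell above) that builds the same two-line, comma-separated body: a resource line first, then one metric sample, with empty positional fields left as placeholder commas.

```python
import time

def build_post_body(resource, metric, value,
                    adapter_kind="Http Post", resource_kind="Pure FlashArray",
                    resource_desc="", alarm_level="0", alarm_msg="alarm message"):
    """Mirror the $body string from the PowerShell script:
    line 1 describes the resource, line 2 carries one metric sample."""
    # Milliseconds since the Unix epoch, derived from UTC so the sample
    # is not stamped hours behind by the local time zone.
    epoch_ms = int(time.time() * 1000)
    line1 = f"{resource},{adapter_kind},{resource_kind},,{resource_desc}"
    line2 = f"{metric},{alarm_level},{alarm_msg},{epoch_ms},{value}"
    return line1 + "\n" + line2

body = build_post_body("pure1", "Write IO", 1250)
print(body.splitlines()[0])  # pure1,Http Post,Pure FlashArray,,
```

This body is then POSTed to https://<vcops>/HttpPostAdapter/OpenAPIServlet, exactly as Invoke-WebRequest does above.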



For Volumes

Find it on GitHub https://github.com/2vcps/purevcops-volumes

# ignore SSL/TLS certificate warnings when connecting
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
$FlashArrayName = @('pure1','pure2','pure3','pure4')

$AuthAction = @{
    password = "pass"
    username = "user"
}

# load the stored vCOps credential created earlier (see step 2 above)
$pass = cat C:\temp\cred.txt | ConvertTo-SecureString
$mycred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "admin",$pass

# function to perform the HTTP Post web request
function post-vcops ($custval,$custval2,$custval3,$custval4) {
    # url for the vCOps UI VM. Should be the IP, NetBIOS name or FQDN
    $url = "<vcops ip or name>"

    # credentials for vCOps. When running as a scheduled task, use the stored credential loaded above
    $cred = $mycred

    # sets resource name (the volume name)
    $resname = $custval

    # sets adapter kind and resource kind
    $adaptkind = "Http Post"
    $reskind = "Flash Volumes"

    # sets resource description (the array name)
    $resdesc = $custval4

    # sets the metric name
    $metname = $custval2

    # sets the alarm level
    $alrmlev = "0"

    # sets the alarm message
    $alrmmsg = "alarm message"

    # sets the time in milliseconds since the epoch. Taking the end time in UTC
    # keeps the timestamp from landing hours behind (the local time zone offset).
    $epoch = [decimal]::Round((New-TimeSpan -Start (Get-Date -Date "01/01/1970") -End (Get-Date).ToUniversalTime()).TotalMilliseconds)

    # combines the values above into the body for the Http Post request.
    # The fields are comma separated and positional, so extra commas remain
    # as placeholders for parameters we did not specify.
    $body = "$resname,$adaptkind,$reskind,,$resdesc`n$metname,$alrmlev,$alrmmsg,$epoch,$custval3"

    # executes the Http Post request
    Invoke-WebRequest -Uri "https://$url/HttpPostAdapter/OpenAPIServlet" -Credential $cred -Method Post -Body $body

    Write-Host $custval,$custval2,$custval3
}

ForEach ($element in $FlashArrayName) {
    $faName = $element.ToString()
    $ApiToken = Invoke-RestMethod -Method Post -Uri "https://${faName}/api/1.1/auth/apitoken" -Body $AuthAction

    $SessionAction = @{
        api_token = $ApiToken.api_token
    }
    Invoke-RestMethod -Method Post -Uri "https://${faName}/api/1.1/auth/session" -Body $SessionAction -SessionVariable Session

    $PureVolStats = Invoke-RestMethod -Method Get -Uri "https://${faName}/api/1.1/volume?space=true" -WebSession $Session

    ForEach ($Volume in $PureVolStats) {
        # volume size comes back in bytes; convert to gigabytes
        $adjVolumeSize = ($Volume.size / 1024) / 1024 / 1024

        post-vcops $Volume.name "Volume Size" $adjVolumeSize $faName
        post-vcops $Volume.name "Volume Data Reduction" $Volume.data_reduction $faName
        post-vcops $Volume.name "Shared Space" $Volume.shared_space $faName
        post-vcops $Volume.name "Total Reduction" $Volume.total_reduction $faName
        post-vcops $Volume.name "Thin Provisioning" $Volume.thin_provisioning $faName
    }
}
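One detail worth calling out: the REST API reports volume size in bytes, so the script's triple divide-by-1024 is just a bytes-to-gigabytes conversion. In Python terms (illustrative only):

```python
def bytes_to_gib(size_bytes):
    # Same as the script's ($Volume.Size / 1024) / 1024 / 1024
    return size_bytes / 1024 ** 3

print(bytes_to_gib(549755813888))  # 512.0 for a 512 GB volume
```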

Once each of the scripts is working, schedule them as tasks on a Windows server. I run one for volumes and one for arrays every 5 minutes, indefinitely. This will start to dump the data into vCOPS.

Now you can make Dashboards.

Creating Dashboards


Log in to the UI for vCOPS. You must be in the custom UI; the standard UI hides all of the cool non-vSphere customization you can do.


Go to Environment –> Environment Overview


Expand Resource Kinds


This lets you know that data for the array is being accepted. Other than the PowerShell script bombing out and failing, this is the only way you know it is working. Now for a new dashboard.

Click Dashboards -> Add


Drag Resources, Metric Selector, Metric Graph and Heat Map to the Right


Name it and Click OK

Adjust the Layout


I like a nice Column for information and a bigger display area for graphs and heat maps. Adjust to your preference.

Edit the Resources Widget


Edit the Name and filters to tag


Now we just see the Flash Arrays


Select your Resources widget (I named mine Lab Flash Arrays) as the providing widget for the Metric Selector. Also select Lab Flash Arrays and the Metric Selector as the providing widgets for the Metric Graph.

Edit the Metric Graph Widget by clicking the gear icon


I change the Res. Interaction Mode to SampleCustomViews.xml. This way, when I select a FlashArray, the graph does not show up until I double-click the metric in the Metric Selector. You are of course free to do it as you like.

The Heat Map


Edit the heat map and you will find tons of options.


Create a Configuration


Name the New Configuration


Group by and Resource Kinds


Group by the Resource Kind and then select Pure Flash Array in the drop down.

Select the Metric to Size the Heatmap and Color the Heatmap


Adjust the colors if you think red and green are boring


Save the Config!


Look! A cool new heatmap


Do this for all the metrics you want to have as a drop-down in the dashboard.

Obviously there are a lot more things you can do with the Dashboards and widgets. Hopefully this is enough to get you kicked off.

A Brand New Dashboard


Staying through Thursday at VMworld? Come to PureStorage Evolve

When: Thursday August 28th
1:00pm – 5:45pm (conference) and 5:45pm – 10:00pm (networking pavilion)
Where: Yerba Buena Center

It will be awesome. Register today!

Why should you come?
Flash is changing virtualization more than any other technology. With storage no longer in the way, the journey to 100% virtualization can be a reality, and you can focus on the cloud operations you need to move to what is next for your IT organization. Stop letting legacy storage distract you from what can move your business forward. Come to Evolve.

Provision vSphere Datastores on Pure Storage Volumes with Powershell

A week or so ago our Pure Storage PowerShell guru Barks (@themsftdude) sent out some examples of using PowerShell to get information via the Pure Storage REST API. My brain immediately started thinking about how we could combine this with PowerCLI into a script that creates the LUN on Pure and then the datastore in vSphere. So now provision away with PowerShell! You know, if that is what you like to do. We also have a vCenter plugin if you like that better.

So now you can take this code and put it into a file New-PSDataStore.ps1

What we are doing:

1. Login to vCenter and the REST API for the Array.
2. Create the Volume on the Flash Array.
3. Place the new volume in the Hostgroup with your ESX cluster.
4. Rescan the host.
5. Create the new Datastore.

Required parameters:

-FlashArray   The name of your array
-vCenter      Name of your vCenter host
-vCluster     Name of the cluster your hosts are in. If you don't have clusters (what?) you will need to modify the script slightly.
-HostGroup    The name of the host group on the Pure FlashArray.
-VolumeName   Name of the volume and datastore
-VolumeSize   Size of the volume. This requires denoting the G for Gigabytes or T for Terabytes
-pureUser     The Pure FlashArray username
-purePass     The Pure FlashArray password
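On -VolumeSize, the script passes the string straight through to the REST call and Purity interprets the G/T suffix. If you wanted to validate or convert the value yourself first, a hypothetical Python checker for the same <number>G / <number>T convention might look like this (the names here are my own, not part of the script):

```python
import re

def parse_volume_size(size):
    """Accepts strings like '500G' or '2T' and returns bytes.
    Hypothetical helper; the actual script just forwards the string."""
    m = re.fullmatch(r"(\d+)([GT])", size)
    if not m:
        raise ValueError("expected <number>G or <number>T, e.g. 500G")
    number, unit = int(m.group(1)), m.group(2)
    # binary units: G = 1024^3 bytes, T = 1024^4 bytes
    return number * (1024 ** 3 if unit == "G" else 1024 ** 4)

print(parse_volume_size("500G"))  # 536870912000
```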

# example usage
#.\new-PSdatastore.ps1 -FlashArray "Array" -vCenter "vcenter" -vCluster "clustername" -HostGroup "HostGroup" -VolumeName "NewVol" -VolumeSize 500G -pureUser pureuser -purePass purepass
#On the Volume Size parameter you must include the letter after the number I have tested <number>G for Gigabytes and <number>T for Terabytes
#Special thanks to Barkz www.themicrosoftdude.com @themsftdude for the kickstart on the API calls.
#Find me @jon_2vcps on the twitters. Please make this script better.
# If you do not have a stored PowerCLI credential you will be prompted for the vCenter credentials.
#Not an official supported Pure Storage product, use as you wish at your own risk.

param(
       [string] $FlashArray,
       [string] $VCenter,
       [string] $vCluster,
       [string] $HostGroup,
       [string] $VolumeName,
       [string] $VolumeSize,
       [string] $pureUser,
       [string] $purePass
)

# load the PowerCLI snap-in
Add-PSSnapin VMware.VimAutomation.Core

# ignore SSL/TLS certificate warnings when connecting
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
$FlashArrayName = $FlashArray
$vCenterServer = $VCenter
$esxHostGroup = $HostGroup
$vname = $VolumeName
$vSize = $VolumeSize
Connect-VIServer -Server $vCenterServer

# pick one host in the cluster to do the rescan and datastore creation
$workHost = Get-VMHost -Location $vCluster | Select-Object -First 1

# authenticate to the FlashArray REST API
$AuthAction = @{
    password = $purePass
    username = $pureUser
}
$ApiToken = Invoke-RestMethod -Method Post -Uri "https://${FlashArrayName}/api/1.1/auth/apitoken" -Body $AuthAction

$SessionAction = @{
    api_token = $ApiToken.api_token
}
Invoke-RestMethod -Method Post -Uri "https://${FlashArrayName}/api/1.1/auth/session" -Body $SessionAction -SessionVariable Session

# create the volume and place it in the host group with the ESX cluster
Invoke-RestMethod -Method Post -Uri "https://${FlashArrayName}/api/1.1/volume/${vname}?size=${vSize}" -WebSession $Session
Invoke-RestMethod -Method Post -Uri "https://${FlashArrayName}/api/1.1/hgroup/${esxHostGroup}/volume/${vname}" -WebSession $Session
$volDetails = Invoke-RestMethod -Method Get -Uri "https://${FlashArrayName}/api/1.1/volume/${vname}" -WebSession $Session

# rescan the host, find the new LUN by its NAA, and create the datastore
$rescanHost = $workHost | Get-VMHostStorage -RescanAllHba
$volNAA = $volDetails.serial
$volNAA = $volNAA.Substring(15)
$afterLUN = $workHost | Get-ScsiLun -CanonicalName "naa.624*${volNAA}"
New-Datastore -VMHost $workHost -Name $vname -Path $afterLUN -Vmfs
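The Substring(15) and the naa.624* wildcard deserve a note. As I understand it, a FlashArray volume's canonical name is naa. followed by Pure's 624a9370 prefix and the 24-character volume serial in lower case, so taking the serial from character 15 onward leaves a 9-character tail that is still unique enough for the wildcard match. Sketched in Python with a made-up serial (verify against your own LUNs):

```python
def canonical_name(serial):
    # FlashArray LUNs surface as naa.624a9370 + the 24-char serial, lower-cased
    # (assumption based on observed arrays, not official documentation)
    return "naa.624a9370" + serial.lower()

def wildcard_tail(serial):
    # Same as the script's $volDetails.serial.Substring(15)
    return serial.lower()[15:]

serial = "6B77DAD6F3F0D1D1000113E1"  # made-up 24-character serial
print(canonical_name(serial).endswith(wildcard_tail(serial)))  # True
```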


Not the Same Ol’ Sessions from Pure Storage at VMworld

I am really excited to be going to VMworld once again. I will be wearing my Orange Nike so most likely my feet won’t hurt quite as bad. Also expect the Pure Orange Superman to make an appearance.
More about the sessions: I will be attending VMworld San Francisco and speaking at VMworld EMEA.

STO2996-SPO – The vExpert Storage Game Show

The session I am stoked to be a part of is STO2996-SPO – The vExpert Storage Game Show. It will be a fun and informative time about next generation storage architectures presented in the form of a game show.  PLUS,  two members of the audience will join the session to help the vExpert teams. I know everyone will want to be on my team in EMEA.

STO3000-SPO – Flash Storage Best Practices and Technology Preview 

This very exciting session with Vaughn and Cody (super-genius vExperts) will go into what to consider when moving your datacenter to all flash. Plus previews of the Pure VVOLs.  If you think you are not ready for all flash, come to this session and learn how Flashy you can be.

STO2999-SPO – Customers Unplugged: Real-World Results with VMware on Flash

I wish I had thought of this. Customers using All Flash with VMware. All Tech, No Slides.

STO1965 – Virtual Volumes Technical Deep Dive

Dive into Virtual Volumes with Rawlinson Rivera – VMware, Suzy Visvanathan – VMware and Vaughn Stewart – Pure Storage. So many customers have asked me what will VVOLS actually do over the last 3 years. This will be a great chance to find that out.

VAPP2132 – Virtualizing Mission Critical Applications on All Flash Storage 

How does Pure storage enable that final 10% of critical applications that just a few years ago people said would be impossible? Meet my friend Avi Nayek from Pure and Mohan Potheri from VMware and learn how flash eliminates storage as the road block to critical applications becoming virtual.

MGT1265 – Improving Cloud Operations Visibility with Log Management and vCenter Log Insight

Cody Hosterman, Did I tell you he is smart? Yeah. He is. Join Cody and Dominic Rivera from US Bank and Bill Roth from VMware on how to increase your Cloud Operations Visibility.

SDDC2754-SPO – New Kids on the Storage Block, File and Share: Lessons in Storage and Virtualization

Lessons from all the upstarts in the storage industry. Most of them are not “startups” anymore. Finding new ways to solve the issues of using Virtualization with legacy storage. Pure Storage, Nimble Storage, Tintri, Tegile, Coho Data, Data Gravity and moderated by Howard Marks from DeepStorage.net.

STO2496-SPO – vSphere Storage Best Practices: Next-Gen Storage Technologies

The Chad and Vaughn show. Now with Rawlinson Rivera! Storage is changing. Did I say that yet?

More information on Pure Storage Sessions

Coming Soon: Support for VMware VVOLs
Pure Storage set to paint VMworld 2014 orange!

VAAI and XCOPY with Pure Storage

VAAI has been around for a while now (almost 4 years) and it is one thing I don't hear customers or others talking about very often. When your vSphere hosts detect that a storage device supports Hardware Acceleration, they attempt to send VAAI commands to it. Full Copy is usually explained like this: when you clone or Storage vMotion a VM, the ESXi host issues a command telling the storage device to move the blocks itself. So in the past the description was very simple: the host issues the command and the blocks move. Set it and forget it, right?

Not so fast, my friend!


As good ol’ Lee Corso would say, “Not so fast, my Friend!”

The VAAI XCOPY command tells the storage device to move 4096 KB (that is, 4 MB) at a time, so every 4 MB is a new command. That is not a big deal for disk-based XCOPY, because the blocks can only move from spindle to spindle so fast. Still way more efficient than before, but sometimes not actually any faster.
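To put a number on the command overhead, a little back-of-the-envelope math (Python used only as a calculator here):

```python
def xcopy_commands(vm_size_gb, transfer_size_mb):
    # One XCOPY command moves transfer_size_mb of data at a time
    return (vm_size_gb * 1024) // transfer_size_mb

# Cloning a 100 GB VM:
print(xcopy_commands(100, 4))   # 25600 commands at the 4 MB default
print(xcopy_commands(100, 16))  # 6400 commands at the 16 MB maximum
```

Four times fewer round trips per clone once the transfer size is raised, which matters when the array itself is no longer the bottleneck.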

Along came the Flash Array.

The FlashArray, XCOPY and VAAI


The Pure Storage snapshot technology is used for XCOPY commands, no matter where they come from. The result is just a metadata pointer change: the blocks never actually move anywhere, since they are stored once and mapped in metadata. This enables zero-impact snaps and clones that can be created as fast as I can click the button in the GUI.
What does this all mean?
Since the ESXi host is telling the FlashArray to move 4MB at a time the copy function does not reach the full potential of what the FlashArray can really do. It is like using a freight train to move cargo across the country but only putting one box in each car.

Pure Storage recommendation


This is why Pure recommends changing the MaxHWTransferSize (the setting that controls the size of the transfer) to the maximum allowed 16384 (or 16MB).

The default is 4096. Commands to check and change the setting via the CLI.

Check the current value:

esxcfg-advcfg -g /DataMover/MaxHWTransferSize
Value of MaxHWTransferSize is 4096

Set the transfer size to the Pure Storage best practice:

esxcfg-advcfg -s 16384 /DataMover/MaxHWTransferSize
Value of MaxHWTransferSize is 16384

…but wait there is more!

So the Pure Storage FlashArray is happy to clone multi-TB volumes using XCOPY with no impact on performance or space usage. The question, then, is why only 16 MB at a time? (The real answer should come from someone way smarter than me at VMware.)

I am curious to try out a Storage vMotion or cloning persistent View desktops that fully use the power of the array.
Until then, still better than spinning disk or no VAAI at all.


Changing the vCenter Graphs to Microsecond

So if you are moving your data center to the next generation of Flash Storage you may have noticed your performance charts in VMware vCenter or other tools look something like this.


You start to think, what good is the millisecond scale in a microsecond world? (I know that screenshot is from vCOPS.)

Luckily VMware provided an answer (sorta kinda).

Using microsecond for Virtual Disk Metrics


Go ahead and select your VM and go to Monitor –> Performance and select Advanced.
First change the View from CPU to Virtual Disk(1).
Then select Chart Options(2)


Deselect the Legacy and move on to microseconds.


Then you can select Save Options to use these settings easily next time. The new settings will be saved in the drop down list in the top right corner.


Finally, you have a scale that can let you see what the Virtual Disks are doing for read and write latency.


Disk vs Virtual Disk Metrics

In the vSphere Online documentation the Disk Metric group is described as:
Disk utilization per host, virtual machine, or datastore. Disk metrics include I/O performance (such as latency and read/write speeds), and utilization metrics for storage as a finite resource.

While Virtual Disk is defined:
Disk utilization and disk performance metrics for virtual machines.

Someone can correct me if I am wrong, but the difference I see is this: even though both are choices when a VM is selected, only the Disk metric group gives stats for the datastore device the VM lives on, shown side by side with that VM's stats, but it does NOT give the option to change the scale to microseconds. Virtual Disk allows only VM-level statistics, but it lets you view read and write latency in microseconds.
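The whole point of the microsecond scale is that sub-millisecond latencies round away to nothing on a millisecond chart. One line of arithmetic shows it:

```python
latency_us = 450  # a typical sub-millisecond flash read, in microseconds

# On a millisecond-scale chart the value collapses to zero...
print(round(latency_us / 1000))  # 0

# ...while the microsecond scale keeps the real number visible.
print(latency_us)  # 450
```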
Hope this helps.

Twelve Months for a Forklift? Check that, Forever Flash

Recently I was speaking with a potential customer who was planning on taking 12 months to move from one end-of-life architecture to the latest and greatest from their very big storage provider. It is absolutely amazing that customers everywhere have been living with this for years. Pure Storage introduced a very awesome solution to this problem, built on the technical awesomeness that a purpose-built-for-flash platform can provide. With no legacy to protect, Pure is more than happy to change the way the storage business is done. More on this later.

First Never Move Your Data


Since I am a geek I will start with real production upgrades to your array. Pure can upgrade with no downtime and no performance impact. This is true for software revisions AND hardware upgrades.

Imagine you have the N-1 generation controllers and you want all the speed and efficiency that comes with the latest and greatest. Usually you would have to wait to buy an all-new array, use some tool to mirror all the data (if you are lucky), and take a short downtime (if you are super lucky) to move over. Do this for every single host and it could take months. Storage vMotion made this super easy, but remember there are still those pesky databases the DBAs never let you virtualize because they don't want to risk it. One more thing: they can never, ever go down, except when you would rather be at your kid's soccer game or something.

Pure Storage allows you to move from controller series older (but still awesome) to series new and shiny (and more awesome) with no downtime, performance still better than you ever had on any $1M boat anchor and get your weekends back.

Now Get the Refresh without the Refresh Quote


Now, imagine getting those new controllers and their inherent boost in performance and efficiency every three years. Just keep your maintenance up to date. Now the conversation dives into OPEX vs CAPEX, resetting contracts, and econ stuff I generally don't cover. Head over to the Forever Flash landing page to dive deeper into what this means. Basically two options exist:

  • Free Every Three – Renew maintenance for 2 more years after year 3 and get the newest controllers.
  • Fresh Every Upgrade – Reset your maintenance every time you buy an upgrade (capacity or compute).

No Mas Forklift


More #ForeverFlash Information

Say it with me, “FOREVER, FOR-EV-ERRRR.”

By the way, that customer came out of his seat with excitement when he heard about Pure NDU and Forever Flash. Awesome.

What happened while getting 100% Virtualized

I often think about how many people have stalled around getting to 100% virtual. I know you are thinking I need to find some fun things to do. You are probably right.

The first thing I thought when I deployed my very first virtual infrastructure project back in the day was, "Man, I want to see if I can virtualize EVERYTHING." This was before I knew much about storage, cloud, and management. I may be naive, but I think there is real potential out there to achieve this goal. There is still low-hanging fruit, depending on how you deploy your infrastructure. Having attended VMware Partner Exchange (PEX), I know how the ecosystem is built around your journey to virtualization. The biggest slide shown to resellers and other partners is the one where VMware shows off that for every $1 a customer spends on VMware, they buy $9-11 in infrastructure. Which I fully believe is the reason many customers never saw the FULL cost savings they could have when going virtual.



I believe we all ran into a couple of different kinds of roadblocks on our path. The first were organizational. Line-of-business owners, groups within IT, and other political entities made traveling the road very difficult. Certain groups didn't want to share. Others started to think VMs were free and went crazy with requests. Finally, the very important people who owned the very important application didn't want to be virtual, because somehow virtualization was a downgrade from dedicated hardware.

Then, if we were able to dodge the organizational roadblocks, there were technical problems. Remember that $11 of drag? The big vendors made an art of refreshing and updating you with new technology. I know, I helped do it. So performance was a problem? Probably buy more disk or servers. Then every 3-5 years they were back, with something new to fix what the previous generation did not deliver on. This "spinning drag," in the case of storage, slowed you from reaching your goal: 100%.



At some point you lose the drive to be 100% virtual. The ideal has been beaten out of you. Well at least my vendor takes me for steak dinner and I get to go to VMworld and pretend I am a big shot every year. This is where you settle. Resign yourself to the fact that everything is so complicated and hard it will never get done. The big vendors make a huge living on keeping you there. Changing the name from VI, to Private Cloud, Hybrid super happy land or whatever some marketing guys that have never opened the vCenter client think of next.



So you are trying to rebuild Amazon in your data center? There are probably lots of other things to fix first. Using more complicated abstraction layers may help in the long run to build a cloud, but I see customers continue to refresh wasteful infrastructure with new infrastructure while they are still trying to figure this out. What we need is a quick and easy win: make things better and save money right away. Then maybe we can keep working on building the utopian cloud.

The low hanging fruit


When we first started to virtualize, we looked for the easy wins. To get you rolling again down the path, we need to identify the lowest-hanging fruit in the data center. Back then we found all the web servers running at 1% CPU and 300MB of RAM (if that) and virtualized them so quickly the app owner didn't even know it happened. Just like a room of 1000 servers all running at 2% CPU usage, there are giant tracts of heat-generating spinning waste covering the data center. You had to buy so many spindles and stripe so wide just to make performance serviceable. You wasted weeks of your life in training classes learning to tweak and tune these boat anchors, because it was always YOUR fault they didn't do what the vendor said they would.

Take that legacy disk technology and consolidate to a system made to make sure it is not the roadblock on the way to being 100% virtual. I remember taking pictures of the stacks of servers getting picked up by the recycling people and now is the time to send off tons of refrigerator sized boxes of spinning dead weight. I am not in marketing so I don’t want to sound like a sales pitch. I am seeing customers realize their goal of virtualization with simple and affordable flash storage. No more data migrations or End of Life forklift upgrades. No more having to decide if the maintenance is so high I should just buy a new box. Just storage that performs well all the time and is fine running virtual Oracle and VDI on the same box.

How we do it


How is Pure Storage able to replace disk with flash (SSD)? Mainly, we created a system from the ground up just for flash, and a company that believes the old way of doing business needs to disappear. Customers say, "You actually do what you said, and more." (That is the biggest reason I am here.) And we do it all at the price of traditional 15k disk. Not there on SATA, yet.

  1. Make it ultra simple. No more tweaking, moving, migrating or refreshing. If you can give a volume a name and a size, you can manage Pure Storage.
  2. Make it efficient. No more wasted space from short-stroking drives; no more wasted space because you created a RAID 10 pool and now have nowhere to move things so you can destroy and recreate it.
  3. Make it available. Support that is awesome, because things do happen. Most likely, though, most of your downtime is planned migrations or code upgrades. Pure Storage allows a controller to be rebooted for a firmware/code upgrade (whatever you want to call it) with zero performance hit and zero outage. Pretty nice for an environment that needs ultimate uptime.
  4. Make sure it always performs. Imagine going to the DBAs and saying, "Everything is under 1ms latency. How about you stop blaming storage and double-check your SQL code?" Now that is something I have wanted to say as an administrator for a long, long time.

Once you remove complicated storage from the list of things preventing you from reaching 100% virtual, you can focus on getting the applications working right, on the automation that makes life easier, and maybe on making it to your kid's soccer games on Saturday.

What do we really need? Cloud? or Change?

Going through the VCAP-DCD material, I had a question, since it comes with the assumption that everyone is working toward building a private cloud. So I started asking: do I need to build a "cloud," and why? Now don't think I have completely gone bonkers; I still think the benefits of cloud could help many IT departments. But more than "how do I build a cloud," the question should be: what do we need to change to provide better service to the business?

We are infrastructure people


As VMware/storage/networking professionals, we tend to think about what equipment we need to do this or that, or how problems X, Y and Z would go away if we could just get 40Gb Ethernet. Often we have to build on top of a legacy. If we ever do get a greenfield opportunity, it usually needs to be done so quickly that we never quite get to investigate all the technology we wish we could. There is stuff like all-flash, hyper-converged things, accelerator appliances, and software-defined everything, all aiming at replacing legacy compute/network/storage.

My last post was about knowing the applications, and this is not a repeat of that, but it is very important for us to look at how our infrastructure choices will impact the business. Beyond business metrics like "my FlashArray allows business unit X to do so many more transactions in a day, which means more money for the business," what else do the internal customers require from the blinking lights in the loud room with the really cold AC?

Ask better questions

  • How does faster storage change the application?
  • What will change if we automate networking?
  • Could workers be more productive if the User experience was better?
  • What are things we do just because we always do them that way?
  • What legacy server, storage and network thought processes can we turn upside down?

This type of foundation enables you to focus on the important things like getting better at Halo. Just kidding. My goal is one day Infrastructure Administrators will get to sleep well at night, their kids will know their names and weekends will once again be for fun things and not Storage, Server or Network cutovers. That is the value of Private Cloud, not that I can now let internal customers self-service provision a VM or application (which is still cool). We gain confidence that our infrastructure is manageable. We have time to work on automating the boring repetitive stuff. You get your life back. Awesome.