All posts by Aaron Margeson

Routing vMotion Traffic in vSphere 6

One of the new features that is officially supported and exposed in the UI in vSphere 6 is routed vMotion traffic.  This post covers the use cases, why this was difficult prior to vSphere 6, and how vSphere 6 overcomes it.

Use Cases

So why would you want to route vMotion traffic, anyway?  Truthfully, in the overwhelming majority of cases, you wouldn’t and shouldn’t.

Why?  Remember a few facts about how vMotion works:

Additional latency, delays, and reduced throughput disproportionately hurt vMotion performance.  When a vMotion operation gets underway, the running contents of memory for a VM are copied from one host to another while changes to the VM’s working set of memory are tracked.  Iterative copies continue until the remaining delta is small enough to be transferred very quickly.  Therefore, the longer a vMotion takes, the more changes accumulate in the working set, and the more changes accumulate, the longer the operation takes, which invites even more changes to occur during the operation.

Adding an unnecessary hop in the network can only reduce vMotion performance, so if you are within the same datacenter, routing vMotion traffic is almost certainly ill advised at best.  About the only situation where it might be a good idea is a very large datacenter with hundreds of hosts, where too many broadcasts within a single LAN segment cause performance deterioration, but you only infrequently need to vMotion VMs between hosts in different clusters.  You would need A LOT of ESXi hosts that may need to vMotion between each other before that would make sense.

So when would routed vMotion traffic make sense?  When vMotioning VMs between datacenters!  Sure, you could stretch the vMotion Layer 2 network between the datacenters with OTV instead, but at that point, you are choosing the lesser of two evils – vMotioning with a router between hosts in different datacenters, or the inherent perils of stretching an L2 network across sites.  The WAN link will take a far bigger toll than an extra hop in the network, so there’s no question the better choice is to route the vMotion traffic instead of stretching the vMotion network between sites.

This is important because cross-vCenter vMotion is now possible, too, and VMware has enabled additional network portability via other technologies such as NSX, so the need to do this is far greater than in the past, when about the only scenario where routing vMotion traffic made sense was stretched metro storage clusters and the like.

Why was this a problem in the past?

If you’ve never done stretched metro storage clusters, this may never have occurred to you, because there was pretty much never a need to route any VMkernel port group traffic other than host management traffic.  The fundamental problem was that ESXi had a single TCP/IP stack, with one default gateway.  If you followed best practices, you would create multiple VMkernel port groups to segregate iSCSI, NFS, vMotion, Fault Tolerance, and host management traffic, each in its own VLAN.  You would configure the host’s default gateway as an IP in the host’s management subnet, because you probably shouldn’t route any of that other traffic.  Well, now we need to.  Your only option was to create static routes from the command line on every single host.  As workload mobility increases with vSphere 6 cross-vCenter vMotion capabilities, NSX, and vCloud Air, that just isn’t a very practical solution.
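
For the record, here’s roughly what that old per-host workaround looked like using PowerCLI’s esxcli wrapper.  Treat it as a sketch: the vCenter name, subnet, and gateway are made up, and the exact esxcli namespace and argument names can vary between ESXi builds, so verify against your environment first.

# Sketch only: push a static route to a remote vMotion subnet onto every host.
# Assumes ESXi 5.x or later (which exposes esxcli network ip route ipv4 add) and PowerCLI.
Connect-VIServer -Server "vcenter.example.local"      # hypothetical vCenter
foreach ($vmhost in Get-VMHost) {
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    $routeArgs = $esxcli.network.ip.route.ipv4.add.CreateArgs()
    $routeArgs.network = "192.168.20.0/24"            # example: the remote site's vMotion subnet
    $routeArgs.gateway = "192.168.10.1"               # example: gateway on the local vMotion subnet
    $esxcli.network.ip.route.ipv4.add.Invoke($routeArgs)
}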

How does VMware accomplish this in vSphere 6?

Very simple, at least conceptually.  ESXi 6 can run multiple independent TCP/IP stacks.  By default, there are already separate stacks for vMotion and for other traffic, and each can be assigned its own default gateway.

Simple to manage and configure!  Just configure the stacks appropriately, and ensure your VMkernel port groups are configured to use the appropriate stack.  vMotion VMkernel ports should use the vMotion stack, while pretty much everything else should use the default stack.
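
Here’s a rough PowerCLI sketch of what that looks like, with made-up host, switch, and IP values.  I’m assuming a PowerCLI release new enough to expose Get-VMHostNetworkStack and the -NetworkStack parameter on New-VMHostNetworkAdapter (check Get-Help in your version); the vMotion stack’s own default gateway is set on the host under its TCP/IP configuration.

# Sketch only: create a vMotion VMkernel adapter bound to the dedicated vMotion TCP/IP stack.
# Host name, switch, port group, and addresses below are examples, not recommendations.
$vmhost = Get-VMHost -Name "esxi01.example.local"
$vmotionStack = Get-VMHostNetworkStack -VMHost $vmhost |
    Where-Object { $_.Id -like "*vmotion*" }          # match the built-in vMotion stack (verify the Id on your build)
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch "vSwitch0" -PortGroup "vMotion" `
    -IP "192.168.10.11" -SubnetMask "255.255.255.0" -NetworkStack $vmotionStack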

How cool is that?

vSphere 6 – Certificate Management Intro

I like VMware and their core products like vCenter, ESXi, etc.  Personally, one thing I really admire is the general quality of these products, how reliable they are, how well they work, and how VMware keeps addressing their pain points to make them extremely usable.  They just work.

However, certificate management has been a big pain point of the core vSphere product line.  There’s just no way around it.  And certificates are important: you want to be sure the systems you’re connecting to when you manage them really are those systems.  Because of the pain of certificate management within vSphere, because some customers are too small to have an on-premises Certificate Authority, and to ensure the product keeps working, many customers I’ve worked with simply don’t replace the default self-signed certificates generated by vSphere.

That’s obviously less than ideal.  The good news is certificate management has been completely revamped in vSphere 6.  It’s far easier to replace certificates if you like, and you have some flexibility as to how you go about this.

Three Models of Certificate Management

Now, you have several choices for managing vSphere certificates. This post will outline them.  Later, I’ll show you how you can implement each model.  Much of this information comes from a VMworld session I attended called “Certificate Management for Mere Mortals.”  If you have access to the session video, I would highly encourage viewing it!

Before we get into the models, be aware that certificates basically fall into one of two categories – certificates that secure client connections from users and admins, and certificates that allow the different product components to interact.  Also, vCenter now has Certificate Authority functionality built into it.  That’s a bit obvious, since you already had self-signed certificates, but this functionality has been expanded.  For example, you can have vCenter act as a subordinate authority of your enterprise PKI, too!

Effectively, this means you have some questions up front you want to answer:

  1. Are you cool with vCenter acting as a certificate authority at all?  The biggest reason to use vCenter is that it’s easier to manage certificates this way, but your security guidelines may not allow it.
  2. If you are cool with vCenter generating certificates, are you cool with it being a root certificate authority?  If not, you could make it a subordinate CA.
  3. For each certificate, which certificate authority should issue it?  Maybe, for example, your security requirement that the internal PKI must be used only applies to certificates presented to client connections.

From these questions, a few models for certificate management typically emerge.  You effectively end up with four, which are combinations of whether your vCenter acts as a certificate authority and which certificates it generates.

Model 1: Let vCenter do it all!

This model is pretty straight forward.  vCenter will act as a certificate authority for your vSphere environment, and it will generate all the certificates for all the things!  This can be attractive for several reasons.

  1. It’s by far the easiest to implement.  vCenter will generate pretty much all your certificates for you, and install them.
  2. It’ll definitely work.  No worries about generating the wrong certificate.
  3. If you don’t have an internal CA, you’re covered!  vCenter is now your PKI for vSphere.  Sweet!  You can even export vCenter’s root CA certificate, and import it into your clients using Active Directory Group Policy, or other technologies to get client machines to automatically trust these certificates!  Note that it is unsupported for vCenter to generate certificates for anything other than vSphere components.
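
If you go Model 1, the practical step behind that last point is getting the downloaded VMCA root certificate into your clients’ Trusted Root store, usually via Group Policy (Computer Configuration > Policies > Windows Settings > Security Settings > Public Key Policies).  As a minimal sketch, the manual equivalent on a single Windows client would look something like this – the file path and name are just examples:

# Sketch only: trust the vCenter/VMCA root certificate on one Windows machine.
# Assumes the root was already downloaded from vCenter and saved as vmca-root.cer (example name).
Import-Certificate -FilePath "C:\certs\vmca-root.cer" -CertStoreLocation Cert:\LocalMachine\Root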

Model 2: Let vCenter do it all as a subordinate CA to your enterprise PKI

This is a very similar model to the one above.  The only difference is that instead of vCenter being the root CA, you make vCenter a subordinate CA of your enterprise PKI.  This allows your vCenter server to more easily generate certificates that are trusted automatically by client machines, while still ensuring that certificates are easily generated and installed properly.

However, it is a bit more involved than the first model, since you must create a certificate signing request (CSR) in vCenter, submit it to your enterprise PKI, and then install the issued certificate within vCenter manually.

Model 3: Make your enterprise PKI issue all the certificates

Arguably the most secure option if your enterprise PKI itself is properly secured, this model is pretty self-explanatory.  You don’t use any of the certificate functionality within vCenter.  Instead, you must manually generate certificate requests for all vCenter components, ESXi servers, etc., submit them to your enterprise PKI, and install all the resulting certificates on each yourself.

While this could be the most secure way to go about certificate management, it is by far the most laborious solution to implement, and it is the solution that is most likely to be problematic.  You have to ensure your PKI is configured to issue the correct certificate type and properties, you have to install the right certificates on the right components, etc.  It’s all pretty much on you to get everything right!

Model 4: Mix and match!  (SAY WHAT?!?!?)

When I first heard this being discussed in the session, my inner security conscience’s immediate reaction was, “This sounds like a REALLY bad idea!!!”

But as I listened, it actually made quite a bit of sense when done properly.  You can mix and match which certificates are and are not generated by the PKI components within vCenter.  The model that makes sense if you go hybrid (a hybrid solution doesn’t make sense for everyone!) is to let vCenter manage certificate generation for all the certificates that facilitate vSphere component-to-component communication, and use Model 1, 2, or 3 for the certificates that face client connections.  If that meets your security requirements, you get the best of both worlds – certificates issued by your internal PKI that your clients automatically trust, and are thereby (potentially) more secure, plus ease of management and better reliability for all the internal vSphere component certificates that clients never see.

Which should you go with?

I hate using the universal consultant answer, but I have to.  It depends.  If you don’t have an internal PKI, go with Model 1.

If you have an internal PKI just because you had to for something else, and you want easy trusting of vSphere connections by your clients, go with model 1 and import vCenter’s root CA into your client machines, OR go with Model 2.  Which one in this case?  If you don’t consider yourself really good at PKI management, or if you don’t need many machines to be able to connect to vSphere components, probably Model 1.  The more clients that need to connect, the more it might lean you towards Model 2.

Do you have security requirements that prevent you from using vCenter’s PKI capabilities altogether?  You have no choice, go with Model 3.

For people who think they need to go with Model 3, though, I would generally suggest looking at Model 4’s hybrid approach.  Unless you absolutely have to go with Model 3, go with Model 4.

Hope this helps!

Long time no blog

Sitting down here to start writing out some blog posts.  Obviously, if you’ve kept up with my blog at all, you’ll know I haven’t been keeping up with my schedule of trying to do three blog posts a week.

This is because:

  1. I think three blog posts a week is overly ambitious on my part.  It kinda led me to burn out a bit on blogging, and honestly, it also made me churn out quick posts instead of what I want to do more of when I blog – more in-depth posts about topics.  That didn’t help keep me motivated.  So for now, I’m going to try to post twice a week from here on out, but if I feel so inclined, you might see three.
  2. It’s hard to post at VMworld on an iPad.  Or maybe it’s just hard for someone starting out trying to blog regularly to do it while at a conference.  I dunno, but what I do know is I didn’t blog like I wanted to there.
  3. We had to put down our dog, Megan, who we’d had for 14 years.  We don’t have any children, so she was like a child to us in a way.  It was heartbreaking trying to figure out what was wrong, what to do, watching her every day, etc.  My wife also took it really hard, as the two of them were especially joined at the hip, so to speak.  It wasn’t a quick thing, and, probably understandably, I had zero motivation to blog going through that.  The very next week, my wife had shoulder surgery, so a lot of the day-to-day stuff she does fell on me for a while, and something had to take a back seat – blogging was one of them.  Now I’ve got my proverbial crap together in my head, so I’m ready to go!

So, expect to see some more blog posts you hopefully find valuable coming.

Let’s do this!

VMworld Day 1 – Recap

So much for live blogging VMworld.  I need to find something to post to WordPress from my iPad, as the web editor doesn’t work when the bandwidth isn’t good…  Actually, the web editor isn’t good on iOS, period.  Oh, well.

Monday was more labs, Solutions Exchange, and sessions.  In the general session, VMware stated its goal of building a single logical cloud that spans public and private clouds, where you can run all apps – both the enterprise apps we have had for years and the new “cloud native apps” of today and, increasingly, of tomorrow.

So most of the 23,000 attendees were greeted with a well-produced but slightly weird video that looked like something cooked up by somebody watching X-Men while smoking a substance still illegal in most states, as this guy…

…was teaching the young mutant…err…cloud native apps and enterprise apps to hone their powers in security, performance, flexibility, and more.

We learned that we would now be able to vMotion applications between vCloud Air and your private VMware cloud potentially… Cool!

We learned that SRM would now be offered as a cloud offering in conjunction with vCloud Air as well.  Also, very cool!

They also announced vSphere Integrated Containers and discussed Photon, a VMware-optimized Linux container technology that will interoperate with other container technologies, such as Docker.  It’s good to see VMware embrace a technology that is a bit of a counter to their bread and butter – VMs.  Resisting change is often futile.

An EVO SDDC Manager was also announced, which will help automate the management of all components of the Software Defined Data Center, including network virtualization and virtualized storage with VSAN, in a single pane of glass.

Upgrades to VSAN were also announced, and one of the biggest improvements will be the ability to stretch a VSAN across datacenters, effectively making a stretched storage cluster with synchronous replication.  Considering how much solutions like VPLEX cost to do the same thing, this could be a much lower cost option for organizations looking for this type of DR protection.

I’ll have more on specific sessions later, but I wanted to get this out in the meantime.


VMworld Day 0 – Update

Sorry about the late post from yesterday, but I was too exhausted from disembarking from the cruise, getting to VMworld, blah blah blah.

Sunday was a good day to get some quick sessions in, and do a lot of labs.  There’s not enough here to do a lot of posts, so here’s a quick summary of Sunday for me.

  • VMware certifications – Expect VCIX exams for Data Center Virtualization to be available in January and February.  Design will be first, followed shortly after by Administration.
  • The Dell FX line of servers is an interesting piece of hardware.  I’ll do a future blog post about them, but they present an interesting solution for a few scenarios.
  • I played around quite a bit with VSAN in the labs, particularly around policy based management scenarios.  I’m sure that will be another blog post coming soon.

Much more from Day 1 coming…

Most important Active Directory attribute of all!

As I’m writing this, I’m prepping to leave for a much needed vacation, followed by VMworld 2015.  Of course, these posts have been queued up for release to maintain a three per week schedule, so I’ve been blogging my normal amount plus queuing a bunch for my vacation.  So I’m going to indulge in a fun, pointless blog post for once.

I stumbled upon this, and thought it was pretty funny.

Behold, the most important Active Directory attribute of all!  🙂

Expect lots of VMworld goodness starting Monday!

Treadmill desk helpful accessories

In a previous post, I mentioned how much my treadmill desk has changed my life.  I wanted to get that first post out there because I’ve noticed that, probably because treadmill desks are still not that widely used, there’s not a lot of information out there about what to expect, things you might need to go with one, etc.  While this is predominantly a tech blog, I want to help others who are using them, and perhaps they can also share with me anything they’ve found helpful as well.  By no means am I the be-all, end-all expert on treadmill desks, but I am an early adopter, so I wanted to help others who are getting started, too.

I also want to point out that some of these accessories are being recommended due to how much I use my treadmill desk, which is A LOT.  My daily goal is a minimum of 15,000 steps (about 7 miles),  and I try to average over a week about 20,000 steps (about 10 miles), and my personal daily record to date is 35,007 steps, or 17 miles!  That’s a lot of walking!  If you don’t plan to use your desk treadmill that much, some of these may not be needed.

Clothing

Right off the bat, you probably should consider purchasing a few items in this category.  Thankfully, these items are generally not that expensive.  For one, get a dedicated pair of shoes just for walking on the treadmill.  This helps keep your treadmill as clean as possible, and it also allows you to buy the most suitable pair of shoes for walking, even if they aren’t the best looking, or don’t work well for other types of activities.  I tried running shoes that I used exclusively for indoor exercising, such as on my elliptical.  I tried a hiking shoe based on some web research.  I tried a cross trainer from Skechers that I love.  I tried some inserts for them all.  I always thought that dedicated walking shoes were just another way for shoe makers to make you buy another pair of shoes.

Friends, I was wrong.  If you walk a lot, and there’s a good chance you will, get some good walking shoes.  After some research, I got some ASICS Gel Quickwalk 2’s, which were around $50.  Before these, I was getting blisters, and my feet were killing me no matter which of the above I tried.  These were totally worth it, and I highly recommend them.

One shoe I do not recommend at all – any Crocs!  I had some old Crocs I barely used, just for walking back and forth to the mailbox, so I cleaned those up and tried them.  They were extremely comfortable and stopped the blistering.  I loved them, but unfortunately, the tread at the balls of my feet wore out in about 3 weeks.  I thought maybe they were just on their way out because I’d had them for almost 10 years, so I bought a brand new pair made for hiking.  Within two weeks, you could see the same thing was going to happen.  Treadmills eat Crocs for dinner.  Don’t bother.

You may also want some clothing made from fabrics other than cotton.  Exercise shirts, shorts, and underwear made from significant portions of polyester, spandex, and other synthetic fabrics help with sweating, chafing, etc.

For those of you who, like me, sweat an embarrassing amount no matter what physical activity you do, even the slower walking you typically do on a desk treadmill, I found one other helpful accessory – the Halo headband.  It’s a synthetic-fabric sweatband with a rubber sweat barrier that prevents sweat from running down your face.  If you like them, buy two so you can rotate one in while the other is getting washed.

Drinkware

Make sure you get something that’s dishwasher safe since you’ll use it a lot, has a good washable straw and cap, and contains plenty of liquid.  You’ll be drinking a lot, and it’s way too easy to spill a drink on your expensive treadmill or desk while walking, so the lid and straw are essential.

I love Tervis Tumblers.  Get a few big ones with caps and straw, and they’re also great because they keep your drink cold and don’t drip condensation.  They’re expensive, but IMO completely worth it.

Computer Stuff

Especially because of the two bad discs in my neck, ergonomics is very important to me.  IMO, it’s mandatory that you get displays that are raised up ergonomically, so do whatever is required for that, which usually means VESA-mount-compatible LCD panels and monitor mounts that allow height adjustment so the screens sit at or slightly below eye level.

www.monoprice.com is great for some lower cost options.  If different people will be using these monitors on your treadmill desk, make sure you select monitor mount options that can easily adjust on the fly.

I also strongly believe an ergonomic keyboard is a must.  I can’t imagine typing while walking with a conventional keyboard.  I used to use the quite affordable Microsoft Natural Ergonomic 3000 wired keyboard because of its price: I could throw it in the dishwasher to clean it, and if it died, it wasn’t a big deal.  But the newest versions of this keyboard have quite honestly horrible action.  I recently changed to a wireless Microsoft Sculpt Ergonomic keyboard, and I love it.  My only issue is that when you rest your palms on the front rest, it can tip the keyboard towards you.  I got some cheap rubber anti-skid stickers and popped those on the front edge of the keyboard, and that corrected the problem.  The action on this keyboard is as good as you’ll get without getting something with mechanical keys.

Use whatever you like for a mouse, but I used to use one of those hard mousing “surfaces” PC gamers often like.  However, there’s so little friction that walking tends to make you move the mouse ever so slightly, so I switched back to a high quality fabric-type gaming mousepad from SteelSeries, and that works much better.  Now I can walk even when I play first person shooters on my PC.

You Gotta Sit

I know, this sounds weird – you get a treadmill to walk, so why would you ever want to sit?  I had delusions of grandeur of walking all the live long day while I worked.  Look back at those walking numbers above.  That seems like a lot, but it’s usually not 8 hours a day of walking.  You do need to sit from time to time, and believe me, it is simply not practical to move the treadmill out of the way.  And at least for my office chair, the deck wasn’t wide enough for the chair wheels to sit flat, not to mention the wheels would probably be horrible for the treadmill belt anyway.

I first tried a friend’s recommendation to put an exercise ball on top of the treadmill and use that.  It’s better for you than sitting in a backed chair, as it promotes better posture and strengthens your core muscles.  Plus, it’s dirt cheap for a chair!  Awesome!

I tried to make it work.  I gave it a good solid month, but in the end, I absolutely couldn’t stand it, and I ended up slumping and putting my elbows on the table to rest from all the walking, which hurt my back.  It did motivate me to get back up and walk, but honestly, I was motivated to walk anyway.

I finally had a friend of a co-worker who does woodworking build me a platform to put my chair on.  I’ll post about it in the future, but it’s something you may need to consider in the meantime.

PS.  Somebody should totally do a Kickstarter campaign for an easy to assemble solution for that.

Activity Tracker

The Lifespan TR1200-DT3 treadmill does come with a built-in console that tracks calories burned, steps walked, and distance right on the treadmill.  It even supports Bluetooth connections to your smartphone and whatnot, but the software and interface quite honestly suck.  It’s one of the few things that just plain doesn’t work well unless you manually write down what you’ve done from the unit.

If you want to track your walking, I would recommend getting whatever activity tracker you like.  My wife uses an inexpensive Jawbone Up Move, which works well.  I use a Lumo Lift, since it tracks steps and buzzes at me to notify me if my posture isn’t good, which helps my neck.  Choose whichever one works for you to easily track your steps and what not.

Miscellaneous

I had no idea how hot treadmills get until I put one in my already hot office.  My office was bad enough before I put this thing in here.  It’s one office with one doorway, three windows, my workstation with 4 monitors, my wife’s PC with three monitors, a PC running Windows Home Server, an Iomega IX-4 for a lab NAS, a router, a switch, a FIOS cable modem, and a partridge in a pear tree.  It’s also upstairs, and it’s now summer time.  Just being upstairs adds 5F to the average temperature since I don’t have dual zones.  All that equipment adds another 3F before the treadmill even runs.  Ceiling fans only do so much.  When this thing is going, I tried putting a desk fan blowing in my face, but that irritated my eyes, and it wasn’t enough anyway.  We already had a portable AC unit for this room, so it has to get cranked down even more when I’m walking.

Also, due to all the computer equipment, I was already borderline for power.  The treadmill was the straw that broke the camel’s back, so I had to hire an electrician to run another line up.  Otherwise, doing so much as running a vacuum cleaner upstairs tripped the breaker for most of the upstairs power.

Obviously, everyone’s situation is different for both of those.

Also, if you sweat like I do and need a way to listen to your computer’s sound privately, consider some kind of sweat-friendly earbuds.  I had some big headphones I absolutely loved listening to music with at my computer because they were so comfortable and sounded good, but with the heat in the room, it was like jogging in the summer with super-insulated earmuffs on.  They now live at my wife’s desk.

So there you have it, my list of accessories to look into for your treadmill.

Have you gotten a desk treadmill?  What accessories would you recommend?

Why don’t people use stub zones?

This is one of the odder trends I’ve noticed in DNS configurations – clear examples of where stub zones should be used, yet I rarely ever see stub zones in environments except the ones I set up myself.  I suspect it may be because there’s so much widespread misunderstanding of what they are, so people don’t use them, even when they should.  Hopefully, this post will clear up what stub zones are, how they work, and when to use (and not use) them.

What are stub zones?

Stub zones are DNS zones that contain only the SOA, NS, and glue A records for a domain.  They don’t store any other records, such as other A records, PTR, MX, SRV, TXT, etc.  They’re used to help facilitate name resolution for domains your DNS servers must resolve but don’t host.  Once a stub zone is created, the DNS server connects to the master server it was pointed at and copies down the NS, SOA, and glue A records – and only those records.  Now the DNS server knows which name servers are responsible for the external domain’s forward lookup zone, so it goes to them automatically to resolve all records within that zone.  Stub zones were added as a feature to Windows Server 2003 DNS to facilitate cross-domain DNS name resolution.  You can read Microsoft’s official answer here.
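
On a Windows DNS server running Server 2012 or later, creating one is a one-liner with the DnsServer module.  A minimal sketch, with made-up domain and server names:

# Sketch only: create an AD-integrated stub zone for another internal domain.
# domain2.local and 10.2.0.10 are examples; point -MasterServers at a DNS server
# that is authoritative for the other domain.
Add-DnsServerStubZone -Name "domain2.local" -MasterServers 10.2.0.10 -ReplicationScope "Forest"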

Are stub zones just DNS conditional forwarders/forwarding?

While stub zones and DNS forwarders share the same purpose, they’re not quite the same thing.  DNS conditional forwarding is a simple rule on a DNS server that says “connect to these DNS servers to resolve names in whatever.com.”  Those are static rules that don’t automatically update.  Prior to Windows Server 2008, you couldn’t automatically configure all the DNS servers in your domain to use the same forwarding rule, but 2008 added that ability.  Another important distinction is that forwarding does NOT store DNS zone records, while stub zones do – but don’t let that distract you from the fact that stub zones and forwarding accomplish the same goal, just differently.

Stub zones can also propagate settings much like DNS forwarding does as of Windows 2008, depending upon how you choose to store the zone.  If you make the stub zone Active Directory integrated, the zone is stored in AD and replicated to at least all the domain controllers in the domain where you created it, and potentially throughout the forest.  The key functional difference in the end is that stub zones automatically keep track of what the DNS servers for the other domain are, so long as the administrators of the other domain keep their NS, SOA, and glue A records up to date.  With forwarding rules, whenever a DNS server for the external domain is added or removed, you must update your forwarding rule; that’s not the case with stub zones.
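
For comparison, the conditional forwarder version of the same thing (again with made-up names) is just as short to set up.  The difference is what happens afterwards: the forwarder’s server list stays whatever you typed until you edit it, while the stub zone refreshes its NS, SOA, and glue records from the other domain on its own.

# Sketch only: the static, conditional-forwarder equivalent of the stub zone above.
# If domain2.local adds or retires DNS servers, you have to update this list yourself.
Add-DnsServerConditionalForwarderZone -Name "domain2.local" -MasterServers 10.2.0.10, 10.2.0.11 -ReplicationScope "Forest"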

When should I use stub zones, and when should I use forwarding?

First off, stub zones are not meant for broadly resolving all internet DNS names.  You typically use a catch-all forwarder, or root hints, for that.  Stub zones (and conditional forwarding, for that matter) are typically for situations where you want to resolve DNS names that aren’t on the internet.

With that said, between the two, stub zones are the better choice, provided your DNS environment meets the following:

  • All your DNS servers can connect to all the external DNS servers for that other domain.
  • There’s no significant advantage to having particular DNS servers of yours consistently connect to particular DNS servers of the other domain.  For example, say you have two internal DNS domains, domain1.local and domain2.local, and two physical sites with DNS servers for both domains in each site.  If there’s a compelling reason for the DNS servers in site 1 to always use the other domain’s DNS servers in site 1, stub zones are not the best solution, because within a stub zone you can’t dictate which of the stored DNS servers get used – your DNS servers will use any DNS server for the other domain.  In this day and age, DNS traffic isn’t exactly eating up bandwidth, and remember that DNS records are cached anyway, so unless you have a bunch of records with low TTLs, this generally doesn’t matter.

Why aren’t stub zones used more, then?

Honestly, I think people just know conditional forwarding works and understand how it works, so they use it instead, even when stub zones would be the clearly better choice.  I’ll only point out that if name servers may be added to or removed from the external domain, you have to keep on top of that with forwarders, whereas stub zones update automatically when that happens.  The advantage of stub zones grows with the number of external domains your DNS servers must resolve outside of internet DNS, how often the external domains’ DNS servers change, and how segregated the management of the DNS servers is between the domains.  For example, if domain1.local’s DNS zones are managed by a different team than domain2.local’s, either domain’s admins might not remember to tell the other team that DNS servers have changed.  Stub zones would have handled that automatically.

Yet, stub zones are consistently the redheaded stepchild in DNS design.  But don’t forget about them.  They’re extremely useful, and we should look to use technologies that can help automate our environments.

What about you?  Do you use stub zones?  Why or why not?

NetApp snapshots and volume monitoring script

I just finished a script I created for a customer to help them resolve a problem with their NetApp.  Basically, sometimes their NetApp snapshots would not purge and get stuck, and/or the volumes would run out of space.  I advocated to them many times that if there isn’t a monitoring solution in place to detect this, PowerShell could fill in the gaps.  They took me up on getting something setup because this had happened too often.

First, you need to download the NetApp Data ONTAP PowerShell Toolkit and install it.

This script detects any volume that is more than 90% used and any volume snapshot older than 14 days; both thresholds are easily customizable via the variables at the top.  Finally, it offers to delete the old snapshots while you’re running the script.

$maxvolpercentused = 90                                # flag volumes more than this percent used
$maxsnapshotdesiredage = (Get-Date).AddDays(-14)       # flag snapshots older than 14 days

Import-Module DataONTAP

Write-Host "Enter a user account with full rights within the NetApp Filer"
$cred = Get-Credential
$controller = 'Put Your NetApp filer IP/name here'
$currentcontroller = Connect-NaController -Name $controller -Credential $cred

# Gather snapshot and volume information up front
Write-Host "Getting NetApp volume snapshot information..."
$volsnapshots = Get-NaVol | Get-NaSnapshot
Write-Host "Getting NetApp volume information..."
$vollowspace = Get-NaVol | Where-Object {$_.PercentageUsed -gt $maxvolpercentused}

if ($vollowspace -eq $null) {
    Write-Host "All volumes have sufficient free space!"
}
else {
    Write-Host "The following NetApp volumes have low free space, and should be checked."
    $vollowspace
    Read-Host "Press Enter to continue..."
    Write-Host "Getting volume snapshot information for volumes with low space..."
    $vollowspace | Get-NaSnapshot | Sort-Object TargetName |
        Select-Object TargetName,Name,Created,@{Name="TotalGB";Expression={$_.Total/1GB}}
    Read-Host "Press Enter to continue..."
}

Write-Host "Checking for snapshots older than the max desired age of..."
$maxsnapshotdesiredage
Write-Host "Finding old snapshots..."
$oldsnapshots = Get-NaVol | Get-NaSnapshot | Where-Object {$_.Created -lt $maxsnapshotdesiredage}

if ($oldsnapshots -eq $null) {
    Write-Host "No old snapshots exist!"
}
else {
    Write-Host "The following snapshots are older than the identified longest retention period..."
    $oldsnapshots | Select-Object TargetName,Name,Created,@{Name="TotalGB";Expression={$_.Total/1GB}}
    Read-Host "Press Enter to continue..."
    Write-Host "You will now be asked if you would like to delete each of the above snapshots."
    Write-Host "Note that Yes to All and No to All will not function."
    Write-Host "If you elect to delete them, it is NON-REVERSIBLE!!!"
    # Show each old snapshot, then prompt for confirmation before deleting it
    $oldsnapshots | ForEach-Object {
        $_ | Select-Object TargetName,Name,Created,@{Name="TotalGB";Expression={$_.Total/1GB}}
        $_ | Remove-NaSnapshot -Confirm:$true
    }
}
Write-Host "Script completed!"

The resulting output looks like this.

Enter a user account with full rights within the NetApp Filer

cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
Credential
Getting NetApp volume snapshot information...
Getting NetApp volume information...
All volumes have sufficient free space!
Checking for snapshots older than the max desired age of...

Monday, July 27, 2015 11:18:58 AM
Finding old snapshots...
The following snapshots are older than the identified longest retention period...

TargetName : NA_NFS01_A_DD
Name : smvi__Daily_NFS01_A_&_B_20120621171008
Created : 6/21/2015 4:59:16 PM
TotalGB : 50.8708076477051

Press Enter to continue...:

Script completed!

Hope this helps someone out there!

Road warrior portable monitor

About 13 years ago, I added a second monitor to my home machine, and ever since then, two monitors has been the minimum for me to work on a computer without getting mildly annoyed.  It’s so useful to view two full screens simultaneously.  If you’ve used multiple monitors for any length of time, you know what I’m talking about – it’s just hard going back to using one.  In fact, I run 4x 23″ 1080p monitors at my house.

The problem is of course when you’re onsite, or on the road.  Pretty good chance customers don’t have an extra monitor to use, or they look at you weird if you’re brave enough to ask for one, never mind the fact your hotel room won’t have one if you’re out of town.

For me, since I try to travel light, it’s especially bad when I go down to a single laptop screen.  Honestly, a laptop with a bigger screen doesn’t help.  Being able to reference something on one screen while working on another is something I’m so used to now; it’s hard to function efficiently when I don’t have it.  I need a portable monitor!

I know, First World Problems.  But First World Problems demand First World Solutions!

Check out the Asus MB168B+.  Make sure it’s the + model, because that’s the 1080p one.  It’s a 15.4″ USB monitor weighing 1.76 pounds, which is astoundingly light for its dimensions.  As a reference, the iPads prior to the Air models are about 1.3-1.5 pounds depending on the model.

http://www.asus.com/us/Monitors_Projectors/MB168BPlus/

It comes with a carrying case that doubles as the monitor stand.  It powers off the USB cable, which also keeps the weight down, although reviews I’ve read say that on some computers, like the Surface Pros, it’s best to get a USB Y cable, as they often can’t power the monitor alone.  It actually powers perfectly fine off the single USB port on my Surface Pro, but it does fail if I plug in my non-powered USB ethernet/USB hub dongle with an external hard drive attached without using the Y cable.  It fits easily in my backpack.  Solid picture quality, although it’s just a bit sluggish.  But it’s definitely good enough to be productive on, even drawing Visios.

So if you’re like me and want multiple monitors even at a customer’s site or on the road, check this thing out.  Absolutely loving it!