Saturday, October 3, 2020

Using VM Templates and NSX-T for Repeatable Virtual Network Deployments

So far, we've provided the infrastructure for continuous delivery / continuous integration, but it's been for those other guys.

Is that odd?

Let's try using the principles provided for more infrastructure-oriented reasons. Let's build a network lab using NSX-T.

First, we need some form of a mutable router. Normally, that'd be whatever flavor's "in production," but the specific implementation doesn't really matter. 

Next, let's outline the basic functionality that needs to be in place for this base image to work:

  • Management plane isolation: build a separate routing table (a VRF) for the first applied interface.
  • Automatic connectivity: provide a way to automatically get network connectivity separate from the data plane, and use it for configuration loading, command invocation, and software lifecycle management.
  • Inbound management: enable the inbound management protocols (such as SSH) on that isolated interface.
I have built a light configuration to do that here.

Once operational, we'll want a good process to keep software up to date. With this basic configuration in place, it's possible to SSH into the device and run the update process. Here's how:

vyos@vyos:~$ add system image https://downloads.vyos.io/rolling/current/amd64/vyos-rolling-latest.iso vrf mgmt
Trying to fetch ISO file from https://downloads.vyos.io/rolling/current/amd64/vyos-rolling-latest.iso
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  309M  100  309M    0     0  1424k      0  0:03:42  0:03:42 --:--:-- 1551k
ISO download succeeded.
Checking for digital signature file...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (22) The requested URL returned error: 404 Not Found
Unable to fetch digital signature file.
Do you want to continue without signature check? (yes/no) [yes] yes
Checking MD5 checksums of files on the ISO image...OK.
Done!
What would you like to name this image? [1.3-rolling-202010020117]:
OK.  This image will be named: 1.3-rolling-202010020117
Installing "1.3-rolling-202010020117" image.
Copying new release files...
Would you like to save the current configuration
directory and config file? (Yes/No) [Yes]: Yes
Copying current configuration...
Would you like to save the SSH host keys from your
current configuration? (Yes/No) [Yes]:
Copying SSH keys...
Running post-install script...
Setting up grub configuration...
Done.
vyos@vyos:~$ show system image
The system currently has the following image(s) installed:

   1: 1.3-rolling-202010020117 (default boot)
   2: 1.3-rolling-202009200118
vyos@vyos:~$ reboot
Are you sure you want to reboot this system? [y/N] y

...

vyos@vyos:~$ show system image
The system currently has the following image(s) installed:

   1: 1.3-rolling-202010020117 (default boot) (running image)
   2: 1.3-rolling-202009200118

vyos@vyos:~$ delete system image
Possible completions:
  Enter       Execute the current command
  1.3-rolling-202009200118
                Name of image image to delete
  1.3-rolling-202010020117

vyos@vyos:~$ delete system image 1.3-rolling-202009200118
Are you sure you want to delete the
"1.3-rolling-202009200118" image? (Yes/No) [No]: Yes
Deleting the "1.3-rolling-202009200118" image...
Done
Ta-da! New version! We also cleaned up the old image to reclaim disk space.

Our virtual router is built - let's shut it down, and then convert it to a template:
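The conversion itself is just a couple of clicks in the vSphere client, but if you'd rather script it, here's a rough sketch using pyVmomi. The vCenter address, credentials, and VM name below are placeholders, so treat this as an illustration rather than a finished tool:

# Hedged sketch: power off the VyOS VM and mark it as a template via pyVmomi.
# Hostname, credentials, and VM name are placeholders for illustration only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only - don't skip certificate checks in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "vyos-base")  # hypothetical VM name

# Power the router off if it's still running, then mark it as a template
if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
    WaitForTask(vm.PowerOffVM_Task())
vm.MarkAsTemplate()

Disconnect(si)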

Ready to go!

Sunday, August 16, 2020

Why Automate? Writing a self-testing Python class for REST or XML API invocation

 So far, most API invocations, at least in terms of what you need to do, are pretty simple to execute.

Then again, just about every other administrative function on a computer is, as well. For example:

  • Clicking a button
  • Typing in a command or variable
Interacting with a programmable interface is as simple as any other interaction with a computer.

The primary goal with an API is not to simply replace any of those functions normally performed by a user. Using a programmable interface effectively skips most of the rigmarole performed by a skilled administrator, like:
  • Ensuring the change is correct
  • Ensuring the change is appropriate
  • Ensuring that the change won't have unexpected impacts
As an example, when you enter a vehicle to back it out of a driveway, you achieve these goals:
  • Correct: You ensure that you are entering the right vehicle, in the right driveway, and will head in the right direction.
  • Appropriate: You typically don't perform this action without a need, but you also don't use someone else's vehicle without permission, drive at inappropriate speeds, or take unnecessary steps that could endanger life.
  • Performs as expected: People are more unpredictable than software, so the analogy falls apart a bit here, but drivers generally get where they intend to go.
While most people don't consciously realize that they're performing these steps, each one is typically present. We see many instances in the industry where engineers are considered "unreliable"; in my experience, these individuals just aren't aware of those steps and simply need to make them a conscious effort.

This has to be a fully conscious effort when developing software or automating changes. While a programmable interface does not perform these things automatically, we can do them ourselves relatively easily, given the right tools.

Let's cover this in micro first - and cover the concept of unit testing.

Unit testing is based on the principle that every individual thing you can do programmatically should be tested at least once.

The website Software Testing Fundamentals actually covers unit testing itself much more thoroughly than I will here, as this is geared more towards immediate practical applications for people who don't exclusively write code for a living.

This is step one to ensuring that programmatic changes are correct, appropriate, and won't have unintended side effects (or, at the very least, to ensuring your infrastructure won't end up on r/softwaregore).

For this to work, every single software function executed must be proven just like any other mathematical formula. Typically, the easiest way to do this from a pure mathematics standpoint is by trying the formula in reverse.
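As a purely hypothetical illustration of "trying the formula in reverse": if a function converts a prefix length into a netmask, a unit test can run the conversion backwards over every possible input and check that it lands where it started.

# Minimal, hypothetical example of proving a function by reversing it.
import unittest


def prefix_to_netmask(prefix_len):
    """Convert a prefix length (e.g. 24) into a dotted-quad netmask."""
    bits = (0xFFFFFFFF << (32 - prefix_len)) & 0xFFFFFFFF
    return ".".join(str((bits >> shift) & 0xFF) for shift in (24, 16, 8, 0))


def netmask_to_prefix(netmask):
    """Convert a dotted-quad netmask back into a prefix length."""
    return sum(bin(int(octet)).count("1") for octet in netmask.split("."))


class TestMaskConversion(unittest.TestCase):
    def test_round_trip(self):
        # Reversing the formula should return the original value for every possible input
        for prefix_len in range(0, 33):
            self.assertEqual(netmask_to_prefix(prefix_to_netmask(prefix_len)), prefix_len)


if __name__ == "__main__":
    unittest.main()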

I'll be honest: this doesn't scale particularly well when dealing with infrastructure programmability. We used to joke in college that physicists and mathematicians would start with "assuming a cow is a sphere at absolute zero in a vacuum," but we didn't really understand the point yet. The joke gets inherited and re-used because we, as engineers designing infrastructure, have limited time and resources to tackle the fractal complexity of what we consider the "real world."

Infrastructure designers and maintainers live somewhere between the two: software is based on mathematics, but it is slowly approaching the fractal complexity of the "real world."

So, we rip off what other engineering disciplines have done for millennia: component testing.

Typically, engineers test a component based on results, or by breaking it. Some examples of where these approaches are practical:
  • Mathematical proofs and sanity checks: Generally, if you ask for a fraction, you want a fraction. If you ask for a boolean, you want true or false. If you ask for a routing table, you probably don't want a VPN client table.
  • Simulations: Run the code against simulated production systems, remembering that machines don't really mind ugly levels of repetition. In physical engineering, sample sizes of 100% on individual tests are impossible, so test coverage stays in the low percentages and is later shown to be statistically representative. We're not really burdened by that here!
  • Fuzzing: Intentionally feed a piece of software garbage input, and verify that it fails gracefully instead of doing something harmful.
Third-party tools like pipelines can handle automated test EXECUTION, but before we get there, we need to cover how to test - and, better yet, how to bake testing in so that it doesn't take much effort.

If you already have a library that you're re-using to execute changes, you're handing off responsibility for the mathematical proofs, but as the person executing a change, you still have operational responsibility for any unintended effects. So treat this like an engineer would, and move forward with simulations and fuzzing.

Let's start by creating a Python class. PEP 8, the style guide for writing Python code, has a lot to say about names. I'll call this one IronStrataReliquary, for the following reasons:
  • CapWords: This is simply the naming convention PEP 8 prescribes for classes.
  • Obvious: 
    • Iron is a common prefix for Palo Alto coding projects - it's a portmanteau derived from a common acronym for Palo Alto Networks (PAN). 
    • Strata is the current branding for Palo Alto's NGFW or "core" product line; this delineates it from Cortex and Prisma.
    • We love things in threes. A reliquary is a thing that holds relics - I picked this because the word "toolbox" felt too derivative.
  • Unique: We want to package this class as an installable, and if the name conflicts with existing software, it's typically because of a class name. 
From here, let's outline roughly what the class should contain (a sketch follows the list):
  • Initialization: This is not a constructor in the C/C++ sense; it's effectively a script or function to bootstrap an object. In our case, __init__ contains our initial connection to a Strata appliance and prepares it for immediate use.
  • Variables: I am storing API XML responses against variable names in a table:
    • Name: What you'd find it by, annotated with the version first tested against
    • XML Query string: This is you asking the API for something
    • XML Response string: This is what a normal response should contain, in some form. See how easy this is?
  • HTTP errors: Just a quick lookup table - I didn't invent these codes, but I added the HTTP errors that an NGFW can throw as well.
  • API GET/POST functions: Feed them XML and they'll send it to the NGFW.
  • Data conversion: Interpret HTTP errors, convert XML to JSON, etc.
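Here's a minimal structural sketch of that outline. The query and response strings, the hostname, and the table contents are illustrative placeholders (check the PAN-OS XML API documentation for the exact strings); the point is the shape of the class, not the specific calls:

# Hedged structural sketch of the outline above - not the finished library.
# Query/response strings and the hostname are illustrative placeholders.
import requests
import xmltodict


class IronStrataReliquary:
    """Connects to a Strata (PAN-OS) appliance and can self-test its own queries."""

    # Variables: name -> the query to send and a fragment a healthy response should contain
    strata_bibliotheca = {
        "show_system_info_9.1": {
            "xml_query": "<show><system><info></info></system></show>",
            "xml_response": "<hostname>",
        },
    }

    # HTTP errors the appliance is known to throw (standard status codes)
    http_errors = {400: "Bad Request", 403: "Forbidden", 404: "Not Found"}

    def __init__(self, hostname, api_key):
        # Initialization: stash connection details so the object is ready for immediate use
        self.url = "https://" + hostname + "/api/"
        self.api_key = api_key

    def get(self, xml_cmd):
        # API GET function: feed this XML and it gets sent to the NGFW
        response = requests.get(self.url,
                                params={"type": "op", "cmd": xml_cmd, "key": self.api_key},
                                verify=False, timeout=10)
        if response.status_code in self.http_errors:
            raise RuntimeError(self.http_errors[response.status_code])
        return response.text

    def to_json(self, xml_text):
        # Data conversion: XML in, dictionary (JSON-friendly) out
        return xmltodict.parse(xml_text)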
All I have to do, once done, is write an exceedingly simple script to test this out:
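A self-test loop against that structure might look like this - again a sketch, reusing the hypothetical class and placeholder values from above:

# Hedged sketch of the self-test loop described below, reusing the class sketched above.
firewall = IronStrataReliquary("fw.lab.local", "REDACTED-API-KEY")  # placeholder host/key

for name, entry in IronStrataReliquary.strata_bibliotheca.items():
    result = firewall.get(entry["xml_query"])
    # Each stored response fragment should appear somewhere in what the appliance returned
    assert entry["xml_response"] in result, name + " failed its self-test"
    print(name + ": OK")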

Since the array in question already stores the expected responses, I can apply a for loop and iterate through all of the provided XML queries and responses to test the code with nearly full coverage. After I've finished the rest of the PEP 8 / code conformance work, the last remaining items are to:
  • Explore the API and add more variables/responses
  • Export strata_bibliotheca to a JSON file for easy management outside of the Python class.

Sunday, May 24, 2020

Why Automate, Part 2: RESTful APIs and why they aren't as hard as you think

Let's be realistic about the API craze - it seems everything has one, and everybody is talking about API consumption in their environment as if they've invented fire.

Here are a few things to know about APIs that could have been communicated better:

  • Writing code to consume an API is easy. Most of the time, a cURL command will do what you need. To top it off, most platforms have a Swagger UI, or even better, an API Sandbox to guide you through it.
  • You don't have to write code to consume an API. Most of the time, you're simply buying a product that does this for you. For example, with ArubaOS all management plane traffic uses PAPI to communicate, and you just interact with the controller. Even better, platforms like Ansible and HashiCorp's Terraform make it as easy as defining what you want in a YAML file.
  • APIs need to be secured. As a security practitioner, this one is pretty scary. Think of an API as your SSH connection, but with less baked-in security controls, because the industry hasn't hardened m(any) of them yet. API proxies are really useful here because you can limit what permissions any given client can have.
  • APIs are useful in ways that the CLI isn't. There are features and advantages to performing work via any API - one of which is platform abstraction. You can easily write code to make changes to a Juniper switch as a Cisco guy, just by learning the automation constructs!
  • If you're sick of PuTTY's (or your SSH client of choice's) bulk copy issues, the API is for you. Even if you don't want to regularly use an API for most things, bulk changes are typically authenticated and validated and will tell you where any breakage is. Next time you install a few hundred static routes or import a multi-line ACL, try it. How do you validate that those changes went in today? Have you ever had issues with just one missing line when doing those bulk imports?
Let's try and consume an API with base code - just to see how easy it really is.

First, let's try something easy, adding a few hundred static routes to an NX-OS device. The main reason why I'm using NX-OS here is that the platform includes an "API Sandbox" by default, which should be disabled in production environments:

no nxapi sandbox

That being said, we're using a lab, and it's stitched together via NSX-T. We can firewall, IDS, etc. the management and data plane of any simulated network asset, and connect them as arbitrary topologies to fit our needs really easily. These workloads (virtual routers & switches) should be ephemeral, so it should be OK for now. Later I'll go into automatically securing and loading base configurations.

Let's get started! Here's the NX-API Sandbox:
I generated an IP list of /32s starting from 1.1.1.1/32 up to 1.1.2.44/32 as null routes, with individual tags, and applied it accordingly. Then I set the format to JSON, the mode to cli_conf, and the error action to "rollback on error". This converts everything into a common language and rolls back the change if there are problems.

Generated code is here.
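Trimmed down to the first eight routes for brevity, the sandbox output boils down to something like this sketch. The switch address and credentials are placeholders, and the ins_api payload fields may vary slightly between NX-OS releases, so verify against what your sandbox generates:

# Hedged sketch of what the NX-API sandbox generates: POST a cli_conf payload to /ins.
# Switch address and credentials are placeholders; verify the payload against your NX-OS release.
import json
import requests

switch = "192.0.2.10"  # placeholder management address
commands = " ;".join(
    "ip route 1.1.1.{0}/32 Null0 tag 111{0}".format(octet) for octet in range(1, 9)
)

payload = {
    "ins_api": {
        "version": "1.0",
        "type": "cli_conf",
        "chunk": "0",
        "sid": "1",
        "input": commands,
        "output_format": "json",
        "rollback": "rollback-on-error",
    }
}

response = requests.post(
    "http://{0}/ins".format(switch),
    auth=("admin", "admin"),  # placeholder credentials
    headers={"content-type": "application/json"},
    data=json.dumps(payload),
    timeout=30,
)
print(response.status_code, response.text)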

First, we check the routing table beforehand:
sho ip ro
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

Then we run it:
python3 nxapi-add-bulk-routes.py

And then we verify. 
show ip ro
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

1.1.1.1/32, ubest/mbest: 1/0
*via Null0, [222/0], 00:00:15, static, tag 1111
1.1.1.2/32, ubest/mbest: 1/0
*via Null0, [222/0], 00:00:15, static, tag 1112
1.1.1.3/32, ubest/mbest: 1/0
*via Null0, [222/0], 00:00:15, static, tag 1113
1.1.1.4/32, ubest/mbest: 1/0
*via Null0, [222/0], 00:00:15, static, tag 1114
1.1.1.5/32, ubest/mbest: 1/0
*via Null0, [222/0], 00:00:15, static, tag 1115
1.1.1.6/32, ubest/mbest: 1/0
*via Null0, [222/0], 00:00:15, static, tag 1116
1.1.1.7/32, ubest/mbest: 1/0
*via Null0, [222/0], 00:00:14, static, tag 1117
1.1.1.8/32, ubest/mbest: 1/0
*via Null0, [222/0], 00:00:14, static, tag 1118

We can also roll back (script in GitHub):
python3 nxapi-rollback-bulk-routes.py

And verify:
show ip ro
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

Just to be clear, this is a starting point. There is no error handling, no automatic validation, no secure storage of credentials. It's fantastic that Cisco and other vendors provide this, but there are quite a few things that should be improved with just a tiny bit of coding time:
  • User-friendly formatting of the payload: You'll want to prettify the payload blob so that it's easier to peer review.
  • try/except statements: You want to, at a minimum, get a 200 OK or a 4xx failure of some kind, and report it to whoever is executing your script. This is pretty easy to capture.
  • Automatic change validation: In this example, the code for capturing the routing table after the fact could also be generated by the sandbox, and would make for the perfect validation step. Be creative!
  • Test, test, test: These API calls go by pretty quickly, and you don't have the typical MOP approach where constant validation is taking place. Get a lab, and thoroughly test your automation before using it on a live network.
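As a taste of the first two bullets, a minimal try/except wrapper around the same kind of call might look like this sketch (the switch address, credentials, and the single test route are placeholders):

# Hedged sketch of minimal error handling and status reporting around an NX-API call.
# Switch address, credentials, and the test route are placeholders.
import json
import requests

switch = "192.0.2.10"  # placeholder management address
payload = {
    "ins_api": {
        "version": "1.0",
        "type": "cli_conf",
        "chunk": "0",
        "sid": "1",
        "input": "ip route 1.1.1.1/32 Null0 tag 1111",
        "output_format": "json",
        "rollback": "rollback-on-error",
    }
}

try:
    response = requests.post(
        "http://{0}/ins".format(switch),
        auth=("admin", "admin"),  # placeholder credentials - don't hard-code these for real
        headers={"content-type": "application/json"},
        data=json.dumps(payload),
        timeout=30,
    )
    response.raise_for_status()  # turns 4xx/5xx responses into an exception
    print("Change accepted: HTTP", response.status_code)
except requests.exceptions.RequestException as err:
    print("Change failed, report it to the operator:", err)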

I'll be adding another example that incorporates these values at a later date.

My examples of automation implementations are here.

Cisco's library of NX-OS examples is here.

Sunday, March 15, 2020

IPv6 Sage Certification with NSX-T, Part 2

To get past the first major test (Explorer), you simply need to access a page over IPv6, and pass a quiz. To do this, spin up a desktop VM on your dual-stack vn-segment and navigate to https://ipv6.he.net/certification/

To get past your next phase (Enthusiast) you do have to spend some money - purchase a domain (the cheaper, the better) and link it to he.net's name servers. Jacob Salmela has a pretty good step-by-step on this: (https://jacobsalmela.com/2013/10/30/ipv6-certification-walkthrough-enthusiast-level-hurricane-electric-part-3/)

From here, you should be able to get through it via trial and error. I recommend just spinning up a Linux VM on that vn-segment and toying around with it, e.g. installing Apache, Postfix, etc.

One thing worth noting is that the last few phases (Professional on up) have automated tests that may need to be manually restarted by HE to work. If you get really stuck, you can ask them at ipv6@he.net.

IPv6 Up and Running - Dual-Stack connectivity with NSX-T

The next step is to get IPv6 up and running with NSX-T!

This should be pretty short - as with existing deployments of NSX-T, most of the difficult work is already completed. Here are a few preparatory steps to be performed before getting started:
  • Ensure MP-BGP is on and that the data center fabric is running the ipv6-unicast address-family.
  • Ensure the same on NSX-T manager by navigating to Advanced Networking & Security -> Networking -> Routers -> Global Config:
Now, let's review feature support (up to date as of NSX-T 2.5), as it's not really covered in the NSX-T documents. More detail can be found here.

  • Routing
    • IPv6 Unicast AFI
    • eBGP and iBGP
    • ECMP
    • BGP Route Aggregation, Redistribution, tuning
  • Dataplane forwarding
    • Route Advertisements
    • Neighbor Discovery
    • Duplicate Address detection
    • DHCPv6 helper
  • Security
    • Full Layer 4 firewalling
    • IP Discovery/Security, e.g. IP spoofing prevention, DHCPv6 spoofing prevention
We're pretty much covered on the data plane portion, with one notable exception - IPv6 load balancing is not supported. Other things that are not supported include:
  • IPv6 native underlay: VTEP and controller-to-host communication are IPv4 only. I'd expect this to be resolved relatively soon...
  • NSX Manager cannot have an IPv6 address, nor can it cluster via IPv6.
  • vCenter and ESXi still do not fully support IPv6. Additionally, with the deprecation of the FLEX UI, the experimental feature that allowed you to try it is no longer exposed via any GUI.
  • Versions of vRA prior to 8.0 don't appear to support IPv6 autoconfiguration, so it may be a while before you can automatically invoke these features.
Now that I've been a total buzzkill on feature support (VMWare historically hasn't been great on this front), let's get to configuring!

First, let's configure an IPv6 address on our Tier-0 routers:
Add BGP Peers:
Note that you already have Tier-0 to Tier-1 connectivity automatically set up - click "View More" under router links, and you'll see it's using the prefix fcc4::, which is currently reserved by RFC 4193 for unique local connectivity. Props to VMWare for following spec!
There actually isn't much else to do here - you're done. You can add IPv6 subnets and profiles to segments really easily:

And that's it! Interestingly enough, you can run IPv6 only on NSX-T vn-segments as well - just create a new external interface, attach it to the VyOS VM via a vn-segment, and peer BGP.


Saturday, February 15, 2020

IPv6 Sage Certification with NSX-T, Part 1: Requesting an extended prefix

As is probably obvious from the sidebar, I'm pretty enthusiastic about IPv6 - for quite a few reasons, not least of which is getting to implement a new Layer 3 protocol after guys like Vint Cerf already did most of the cool stuff.

However, I didn't want to simply complete the tasks - most people get through all of them without properly implementing IPv6, since no routing or network configuration is required if you simply install a tunnel client on your computer and work from there.

So instead, let's introduce a lot of complexity and make it easier for the testing to fail.

First things first: since we have a whole network in play instead of a single Layer 2 domain, we need a bigger prefix. You can't (shouldn't) chop up a /64 for end devices, so let's start by establishing a larger allocation. HE.net's tunnelbroker site lets us one-click request a /48:
So I'd recommend doing that - from there, we'd want to modify the tunnel created in my previous blog post and chop the new prefix up as you see fit.

I already have a dual-stack Clos fabric in my lab, so establishing tunneled connectivity here was trivial - standing up a VyOS virtual router (config here) and peering BGP with the fabric. This is pretty much the upside to Clos fabrics - you have flexibility in spades.

Saturday, February 1, 2020

Why Automate, Part 1: Network Config Templating in Jinja2

Let's answer the big question: "What's the answer to the ultimate question of life, the universe, and everything?"

Kidding, it's easier to cover the question: "Why automate?"

So let's get started! Here I'm going to start with a few easy and quick ways to benefit from automation, with a slight networking bias...

File Templating


Have you ever deployed a single-config device (it doesn't have to be a router or switch) and encountered copy-paste errors - old VLAN names, for example - carried over from some master config (ideally) or from other devices (not ideally)?

As it turns out, so many developers ran into this issue that they created a templating language specifically for purposes like this - Jinja2.

The API documentation on their website can be a bit overwhelming. There are many features for single-file templating, but if your goal is to cookie-cutter generate device configurations, you don't need to learn all that much of it, as Ansible takes care of the vast majority of the coding required. That's right - no coding required.

The Basics


Jinja2 file templates emphasize the use of variables, and escape them with double curly brackets, for example:
hostname {{ hostname }}

As a language, it also supports a hierarchy of variables:
hostname {{ global.hostname }}-{{ global.switchid }}

This is pretty simple, right? The first step I'd recommend here is to go through any configuration standards you have and highlight all of the variables in it.

Now to add a little bit more difficulty - it's time to define the variables in a document, to eventually combine with the Jinja template we're creating. This is incredibly difficult to do in a vacuum, as you need a good way to name and organize the variables. So let's take that highlighted document and start attaching names and organizing them at the same time. I'd recommend using a text editor that supports multi-document editing, putting your variable list on one side and your Jinja template on the other. Here's how I did it in Visual Studio Code:
As you can see, on the left I have used YAML to define attributes of a leaf switch, while adding the names into the template itself. I'll keep this brief, as there's one important aspect to automation here:

YOU are automating YOUR OWN, EXISTING, expertise on a platform. This is not replacing YOU, nor is it making YOUR SKILLS IRRELEVANT. Those skills are still absolutely necessary. YOU will still have to hand-configure and explore equipment like you always have. The biggest change YOU will see is that you'll have more time to test configurations and make them more reliable, instead of performing some of the more boring tasks like editing text files.

For this reason, I'm not going to get very prescriptive on the what or the how from here, as this is an exploration exercise that will vary greatly based on the use case. Here are some quick guidelines while trying it out:
  • Keep it organized! The Jinja document's supporting YAML file is there for YOU to read. Make it easy to do so.
  • If you think you'll need it, add it. You may not have a use case for making MTU a variable currently, but it's seeing widespread adoption in data center and campus networks - if you think you may change it someday in the future, add it to the documents.
  • Use with extreme prejudice against your configuration templates!
Now that the vast majority of the work here is done, let's focus on the no-code way to combine these files. For this, all you need is Python and Ansible, and pretty much any version works. To achieve this, Ansible has a built-in module called template.

---
- hosts: localhost
  tasks:
    - name: Import Vars...
      include_vars:
        file: example-ios-switch-dictionary.yml
    - name: Combine IOS Stackable Leaf...
      template: 
        src: templates/example-ios-stackable-leaf.j2
        dest: example-ios-stackable-leaf.conf

...and that's it. Run it with the ansible-playbook command, and it will create a new file. Unfortunately, this requires one playbook per configuration, as the include_vars module doesn't unload anything from the YAML file.
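If the one-playbook-per-configuration limitation bothers you, the same combine step is only a few lines of plain Python using the jinja2 and PyYAML libraries. This is a sketch reusing the file names from the playbook above, not a replacement for the Ansible role structure:

# Hedged sketch: the same "combine vars with template" step in plain Python,
# reusing the file names from the playbook above.
import yaml
from jinja2 import Environment, FileSystemLoader

with open("example-ios-switch-dictionary.yml") as f:
    variables = yaml.safe_load(f)

env = Environment(loader=FileSystemLoader("templates"))
template = env.get_template("example-ios-stackable-leaf.j2")

with open("example-ios-stackable-leaf.conf", "w") as f:
    f.write(template.render(variables))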

Usage At Scale

This method scales extremely well - I have provided an example on GitHub (https://github.com/ngschmidt/j2-config-examples) which provides a standardized framework for keeping things organized, like using roles per device configuration, so it should be pretty easy to fork and expand to encompass multiple switches and multiple configuration standards, all in one repo.

In the real world, I use several Git repositories - the sheer quantity of templates and roles just gets out of control otherwise, and collaboration like using Git Pull Requests for continuous review and improvement (It's amazing what you can do with the saved time!) is much easier with that separation.

I've also generated an entire datacenter fabric configuration in seconds this way. Once you get your repositories organized, that's not even that big of a deal.

Demystifying CI/CD and Automation in General

You're already using automation. If you use Pull Requests to improve templates, you're simply formalizing practices you already had - and you also (probably) accidentally did CI/CD and network automation along the way.

A lot of DevOps gurus tend to treat automation work like it's the technological equivalent of inventing the wheel. Much of that is to advance and protect the profession, and less a play to establish dominance or a position of power. Unfortunately, this tends to create a bit of a rift between them and the people they are there to help, but I've never seen that be intentional with DevOps engineers. They're developers, just like any others, with a fiery burning passion for reducing boring, repetitive tasks for you and for making sure the methods to do so are well-organized, and they want to share those experiences. You don't need to give them a hug, but ask how they do stuff - it's probably the quickest way for you and them to learn something.

Sunday, December 29, 2019

Securing Dual-Stack (IPv4,IPv6) Endpoints with NSX-T

I have mentioned in a previous blog post that I'm not using any ACLs on my tunnel broker VM.


This is usually pretty bad, but again, we can get those protections outside of the VM - I'm using this to prove out how NSX-T can provide utility in this situation.

Solution Overview

VyOS is a fantastic platform, with a ton of rich, extensive features that can empower any network engineer to achieve greater outcomes. There's a lot of good stuff - here I'm using it as a tunnel broker, but we also have these other features:

Manageability


  • Configuration versioning: Any network platform with in-built configuration versioning (and its cousin, the wonderful "commit review" capability) gets a favorable vote in my book
  • API/CLI: The two have feature parity. It's source control friendly, as I have already shown.
  • IPv6: You do not need an IPv4 management plane for this platform to work

Functionality

  • All routing protocols except IS-IS
  • All VPN functionality except VPNv4 (although EdgeOS, Ubiquiti's fork, has that, so it shouldn't take long). This includes WireGuard, OpenVPN, and SIT, as I used in this previous example.
  • Full IPv6 support, including DHCPv6, RA, SLAAC, OSPFv3, MP-BGP, etc. The only thing missing is 6to4 for completely native IPv6 deployments.
It'd be fair to say that VyOS is a fantastically capable router which, like a Cisco ISR or any other traditional router, does have some downsides.

What's Missing - or What Could Be Easier

Just as a caveat, I do think we'll see this a lot with virtualized routing and switching. 

VyOS has always had a bit of a problem with firewalling. I've been using it since it was simply Vyatta, prior to Brocade's acquisition, and the primary focus of the platform has always been high-quality routing and switching. Functions like NAT and firewalling are disabled by default and have an extremely obtuse, Layer-4 centric interface for creating new rules. This gets messy pretty quickly, as the rules themselves consume significant configuration space and have to be carefully stacked to apply correctly. This interface is manageable but becomes difficult at scale.

Of course, if it were my entire job to manage firewall policies, I'd automate baseline generation and change modifications; the platform is pretty friendly for that. This may not necessarily be maintainable if it's not placed in an area easily discoverable by other engineers, and it definitely doesn't resemble the "single pane of glass" I'd rather have when running a network.

What I'd like to see is a way to intuitively and centrally implement a set of firewall security policies against this device, in a way that can be centrally audited, managed, and maintained. Keep in mind - the auditing aspect is critically important, as any security control that isn't periodically reviewed may not necessarily be effective.

Fortunately, VMWare's NSX (or as it was previously known, vShield) has been doing this for quite some time. There are some advantages to this:
  • Distributed Firewall enforces traffic at the VM's NIC, but is not controlled by the VM. This means that you don't have to automatically trust the workload to secure it.
  • VM Guest Firewalling CPU/NIC costs don't impact the guest's allocation. This blade has two edges:
    • VM Guests don't need firewall resources factored into their workload, as it's not their problem. This allows for easy onboarding, as the application you're protecting doesn't have to be refactored.
    • VM hosts need CPU to be over-provisioned, as this will be taken out of the host's resources at a high priority. That being said, if you're going down the full VMware Cloud Foundation / Software-Defined Data Center (VCF/SDDC) path, it is important to re-think host overhead anyway, as other components such as vSAN and HA do the same thing!

Securing Workloads

First - we need to ensure that the IPv6 tunnel endpoint VM is on a machine that is eligible for Distributed Firewalling. From the NSX-T homepage, click on the VM Inventory:

Then we select the IPv6 tunnel VM:
From here, let's verify those tags, as we'll be using that in our security policies:

We also need to add some IP Sets - this is the NSX-T construct that handles non-VM or non-Container addressing for external entities. Technically, East-West Firewalling shouldn't always be used for this, but IPv6 tunnel brokering is an edge case: (IP Sets guide here)
From here, you want to add the IP Sets to a group via tag membership - a topic I will cover later as it's vitally important to get right with NSX-T:
We also want to do the same with our virtual machines:



We're all set to start applying policies to it! Navigate over to Security -> East-West Firewalling -> Distributed Firewall:
Add these policies. I have obfuscated my actual addresses under groups for privacy reasons.

That's about it! If you want to add more tunnel nodes, you'd simply apply the tag to any relevant VM with NSX Manager, and all policies are automatically inherited.
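Tag application can also be scripted. Below is a hedged sketch against the NSX-T Manager API - the endpoint and payload shape are from memory of the 2.x API, and the manager address, credentials, VM external ID, and tag values are all placeholders, so verify everything against the API guide for your NSX-T version:

# Hedged sketch: apply the tunnel-node tag to a VM via the NSX-T Manager API.
# Endpoint/payload are assumptions to verify against your version's API guide.
# Manager address, credentials, external_id, and tag values are placeholders.
import requests

nsx_manager = "nsxmgr.lab.local"  # placeholder
payload = {
    "external_id": "5029abcd-0000-0000-0000-000000000000",  # placeholder VM external ID
    "tags": [{"scope": "service", "tag": "ipv6-tunnel"}],    # placeholder tag plan
}

response = requests.post(
    "https://{0}/api/v1/fabric/virtual-machines?action=update_tags".format(nsx_manager),
    auth=("admin", "changeme"),  # placeholder credentials
    json=payload,
    verify=False,  # lab only
    timeout=30,
)
response.raise_for_status()
print("Tag applied; DFW policies keyed on that tag now cover the VM.")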

Some Recommendations

  • If you haven't deployed a micro-segmentation platform, the #1 thing to remember is that distributed firewalling, because it captures all lateral traffic, generates a TON of logs - all of which happen to be invaluable troubleshooting data. I'd recommend rolling out vRealize Log Insight + Network Insight (vRLI/vRNI) to help here, but an ELK stack will probably work just fine in a pinch.
  • Have a tag plan! Retroactive refactoring of tags is a pretty miserable task, so try and get it at least well organized the first time.
  • Have a naming convention for all of the objects listed above! I'll write a skeleton later on and place on this blog, along with tagging strategies.
  • Make sure to set "Applied to" whenever possible, as this will prevent your changes from negatively affecting other data center tenants.
  • Try to use North-South firewalling (tier-0 and tier-1 edges ONLY) for traffic that leaves the data center. East-West wasn't really designed for that.
  • Try to use North-South firewalling, period. If a data center tenant (or their workload) is not globally trusted, assign that entity its own tier-1, making it really easy to wall off from the rest of the network. This is probably the easiest thing to do in NSX-T, and generates the most value!

Saturday, November 23, 2019

IPv6 Up and Running - Address Planning Basics and using a Tunnel Broker

First things first - let's cover some IPv6 basics.

What's Different

Many aspects of IPv6 are actually much easier than most people would expect - since there's such a large addressing space, entire fields of work with IPv6 go away.

Custom CIDR / Subnetting

Remember how you had to do binary math, and use your crystal ball to guess how many hosts will be on any given subnet? Well, if you use CIDR masks from /29 to /19 for individual subnets, that will be replaced with a /64. 

A great deal of functionality - such as RA and DHCPv6 - breaks if you use a subnet mask longer than /64 for generic devices. When setting up any host-facing network, you need to remember only four masks:
  • /64: Use this everywhere
  • /126: Use like a /30, but ONLY when interconnecting network devices. You're not saving space by trying to use this for hosts.
  • /127: Use like a /31, but with even more flakey vendor support. This is more space efficient, but you need to verify that ALL of your equipment supports it, or deal with a really fragmented point-to-point prefix.
  • /128: Loopbacks

NAT

You don't need it, because it's IPv4 duct tape. Prepare yourself for a simpler life without it.

Private Addressing

IPv6 does take a different approach here - there are TWO "private" allocations:
  • Link-local addressing (fe80::/10): This addressing allocation is used on a per-segment basis, and pretty much just exists so that every IPv6 speaker will always have an IP address, allowing routing protocols to work on unnumbered interfaces, for example.
  • ULA (fc00::/7): Unique local addresses are on the "should not be routed" list and, generally speaking, should not be used. You have to use NAT prefix translation to make them globally routable, a feature that isn't well supported. I use this in my spine-and-leaf fabric examples to avoid revealing my publicly allocated prefix, and only in my lab.
Instead, IPv6 architecture focuses on the inverse - allocating prefixes you CAN use. Right now the planet (i.e. Earth, not kidding) has the global (hehehe) allocation of 2000::/3. All IPv6 prefixes are allocated out of this block by providers, using large allocations to ensure easy summarization.

DHCP

DHCPv6 is not mandatory, as SLAAC/RA configuration can provide any client device with the default gateway and DNS servers. For enterprise applications, however, it is recommended to use DHCPv6 so you don't unintentionally disclose any information encoded into your IP by SLAAC, and so that your neighbor tables aren't murdered by SLAAC privacy extensions. More here.

DNS

DNS actually isn't all that different anymore, but still deserves mention for a few reasons. 

The first reason why I think it deserves mention is because, as an application, its IPv6 journey was extremely well designed. 
  • IPv6 constructs are available regardless of which "stack" you're running: Global DNS servers have a new(ish) record type, AAAA, that indicates IPv6 is available for a service, and any DNS server should serve AAAA records even if it is only queried over IPv4. This is useful in situations where your DNS server may have additional attack surface over IPv6, like Microsoft's Active Directory servers. It also helps make your migration strategy a bit smoother, as you implement the IPv6 stack progressively throughout your network.
Second, if you don't have AAAA resolving, IPv6 won't do much for you.
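A quick way to check that AAAA resolution actually works from a given host is a couple of lines of Python with the standard library (example.com is just a stand-in name - point it at your own record):

# Quick check that a name resolves to IPv6 (AAAA) addresses from this host.
# "example.com" is a stand-in - substitute your own record.
import socket

for result in socket.getaddrinfo("example.com", None, socket.AF_INET6):
    print(result[4][0])  # prints each IPv6 address returned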

IPv6 Address Planning

IPv6 address planning is fundamentally different for the reasons listed above, but I do have some general guidelines that help establish a good starting point:
  • /48 and /56 are good site prefixes: Since we are using 8x the space in our FIB for each route, allocate a /48 or /56 per site depending on its size, but don't do anything weird like allocating a /63 or a /62 to save space. Keep your sites consistent. A /56 is the IPv6 equivalent of a /16 in IPv4 - you'll almost always be right allocating at this length.
  • Allocate the last 2 /64s in your prefix for point-to-point prefixes and loopbacks, respectively. It just keeps address fragmentation less messy, and you can summarize the /64s at your backbone to ensure that traceroute "just works".
  • You have lots of space, so leave gaps between sites. If you get a /48, you have 256 /56 sites to play with. You can block out entire regions and sites in a myriad of ways to help your routing table "make sense".
Here's how I did it (/48 allocated to me, prefix is masked):
  • ffff:ffff:ffff:ffff::/64: Loopbacks
  • ffff:ffff:ffff:fffd::/64: Point-to-point links
  • ffff:ffff:ffff:e::/49: Allocated to NSX-T, because I don't have multiple sites in my lab. Don't do this in the real world, this is for various (messy) experiments with address summarization.
  • ffff:ffff:ffff:b::/49: Allocated to the underlay fabric. See above.
  • ffff:ffff:ffff:a::/64: Home campus network. This is where Pinterest, and other meatspace activities live.
I'm actually not using much else - I'm allocating large because IPv6 Address shortening makes it easier to type (P.S. IPv4 Address shortening works too, but there are fewer opportunities. Try and ping 1.1) and allocating properly would look like:
  • ffff:ffff:ffff::/56 for Site A (Maybe a headquarters location?)
  • ffff:ffff:ffff:001::/56 for Site B (Satellite office near HQ?)
  • ffff:ffff:ffff:008::/56 for Site C (in another geographic region or state?)
  • ffff:ffff:ffff:1::/56 for Site D (HQ in another country?)
Hopefully this is helpful - when in doubt, whiteboard it out.
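If the whiteboard gets tedious, Python's ipaddress module will happily enumerate the /56 site blocks inside a /48 for you. A quick sketch, using the 2001:db8::/48 documentation prefix as a stand-in for a real allocation:

# Sketch: carve a /48 into /56 site prefixes and print a few, leaving gaps between sites.
# 2001:db8::/48 is the documentation prefix - substitute your own allocation.
import ipaddress

allocation = ipaddress.ip_network("2001:db8::/48")
sites = list(allocation.subnets(new_prefix=56))

print(len(sites), "possible /56 sites")  # 256
for site in sites[::8][:4]:              # show every eighth /56, four of them
    print(site)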

Well that's nice, but I'd like to actually do something!

Let's go through the process of selecting a tunnel broker (this assumes you do not have native IPv6 connectivity, because this would already be done):

Step 1: Use Wikipedia's Cheat Sheet to select the best tunnel broker for you. Since I'm in the United States, I selected Hurricane Electric. I am biased by their educational outreach and certification program. I cannot recommend enough taking a crack at their Sage certification.
Step 2: Sign up using the links provided in the cheat sheet. If possible, ask for a /48 for maximum productivity.
Step 3: Establish a tunnel - I have provided a VyOS template here, but a great deal of networking equipment supports SIT tunneling, so it's not particularly difficult to set up. Keep in mind that there's no firewall enabled here; I wouldn't recommend the same approach, but I'm handling that elsewhere.
Step 4: Start experimenting!

Saturday, October 26, 2019

Anycast Stateless Services with NSX-T, Implementation

First off, let's cover what's been built so far:
To set up an anycast vIP in NSX-T after standing up your base infrastructure (already depicted and configured), all you have to do is stand up a load balanced vIP at multiple sites. NSX-T takes care of the rest. Here's how:
Create a new load balancing pool.

Create a new load balancer:
Create a new virtual server:
If your Tier-1 gateways have the following configured, you should see a new /32 in your routing table:
Repeat the process for creating a new load balancer and virtual server on your second Tier-1 interface, pinned to a completely separate Tier-0. If multipath is enabled, you should see entries like this in your routing table:


It really is that easy. This process can be repeated for load balancers, and (when eventually supported) multisite network segments.

A few caveats:

  • State isn't carried through: If you're using a stateful service, use your routing protocols (AS-PATH is an easy one) to ensure that devices consistently forward to the same load balancer.
  • Anycast isn't load balancing: That's easy to conflate here, as NSX-T can do both, but anycast won't protect your servers from overload unless you also use a load balancer.
  • Use the same server pool: It was (hopefully) apparent that I used the same pool everywhere. Try to keep regional configurations consistent, to ensure that new additions aren't missed for a pool. Server pools should be configured on a per-region or per-transport-zone basis.
Some additional light reading on anycast implementations:
