Monday 11 November 2013

Inspired by... Tech - "Making up protocols for an SDN world"

--- My Inspiration ---

The Packet Pushers podcast "Show 166 - SDN Controller Strategies"  http://packetpushers.net/show-166-sdn-controller-strategies/

(For those unfamiliar, PacketPushers.net is home to great podcast discussions on some seriously in-depth network topics. As I am not a network engineer some of it is way over my head, but I've learnt some good stuff from it, and occasionally they diverge into server and cloud discussions which I find really interesting to hear about from the network engineer's point of view.)

On this episode Greg Ferro, Brent Salisbury, and Mike Dvorkin talk about Software Defined Networking, particularly in relation to managing the communication requirements of applications and servers.  It is a really interesting topic because it is a crossover point between two goals: the server engineer's "Infrastructure as a Service" goal of designing infrastructure that provisions servers automatically, and the network engineer's "Software Defined Network" goal of automating network objects so that routes are provisioned as traffic needs them, using more attributes than simply the layer 3 source and destination addresses so that more intelligent routing choices can be made.

The question is, how do the newly spun up servers and their applications trigger the creation of the network paths they need?



--- My Thoughts ---

The servers/applications need to make a request to a network controller of some sort. This got me thinking about one probably crazy and unfeasible solution...


What simple examples of similar things do we have today?

DHCP? When a server boots and wants to pass traffic over the network it must check in with a "network controller" in the form of a DHCP server. Although in this case it is the operating system asking for a new IP address or a renewal, it is in a way asking for permission to communicate with other IP addresses on the subnet, though maybe not beyond, because traffic may be blocked further down the network by firewalls or load balancers.  This is because firewalls and load balancers, which are the passport authority for layers 4 to 7, don't generally integrate with the switches and DHCP servers which have authority over the local areas of layers 2 and 3. But integration between these guardians is where we are headed.

So what if we could take our Dynamic Host Configuration Protocol and extend it by lifting its attributes up the stack to include requirements all the way up to layer 7? These would be passed to a network controller, instead of a DHCP server, and this network controller would see to it that the network paths are set up across all the other network guardians.  DANCP? - Dynamic Application Network Configuration Protocol - probably not catchy enough...
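As a thought experiment, a DANCP request might look like a DHCP-style message with extra higher-layer options attached. Everything in the sketch below is hypothetical - the field names, the message shape, the JSON encoding - it is only meant to illustrate what layer 3 to 7 attributes such a message could carry:

```python
import json

# Hypothetical DANCP request: a DHCP-style message extended with
# layer 4-7 attributes describing what the application needs to reach.
dancp_request = {
    "op": "REQUEST",                    # borrowed from DHCP's message types
    "client_mac": "00:16:3e:aa:bb:cc",  # layer 2, as in ordinary DHCP
    "requested_ip": None,               # layer 3: let the controller assign one
    "flows": [                          # the new part: per-flow requirements
        {
            "destination_fqdn": "db.example.internal",  # a name, not an IP
            "port": 5432,                               # layer 4
            "priority": "high",                         # this server's own ranking
        },
        {
            "destination_fqdn": "logs.example.internal",
            "port": 514,
            "priority": "low",
        },
    ],
}

# On the wire this could be serialised however the protocol chose;
# JSON here is just for readability.
wire = json.dumps(dancp_request)
print(wire)
```

The interesting part is the `flows` list: unlike a DHCP lease, which only establishes who you are at layers 2 and 3, each entry states who you want to talk to and how much it matters.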


How do we get the application's requirements?

Thinking again about similar interactions today, we could model it on the way the application stack calls DNS. When a server or an application starts up, the application would pass its network requirements as a call to a code library that in turn calls the operating system DANCP resolver service.  The requirements might include the destination FQDN, the port number, maybe a root URL path, and maybe even some sort of arbitrary priority for the communication in the context of the server's other communications - for example, communications to the database are higher priority than communications to a log server. This way the application is making a statement of its requirements up to layer 7 and potentially beyond. The local operating system DANCP service might report that it already has the network requirements resolved, but on an initial boot it would request these layer 3 to 7 requirements from a DANCP server / network controller, which would trigger the creation of the necessary paths.
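That resolver flow can be sketched in a few lines. Both classes and their names (`DancpResolver`, `request_path`, `FakeController`) are invented purely for illustration - a real resolver would live in the operating system, not application code, and the controller here is a stub that just records requests:

```python
# Hypothetical sketch of a DANCP resolver, modelled on a caching DNS resolver.
# The application states its layer 3-7 requirements; the resolver answers from
# a local cache, or asks the network controller to set up the path on a miss.

class DancpResolver:
    def __init__(self, controller):
        self.controller = controller   # stand-in for the DANCP server
        self.cache = {}                # paths already resolved on this host

    def request_path(self, fqdn, port, url_path="/", priority="normal"):
        key = (fqdn, port, url_path)
        if key in self.cache:          # already resolved, like a warm DNS cache
            return self.cache[key]
        # Initial boot case: ask the controller to create the path.
        result = self.controller.create_path(fqdn, port, url_path, priority)
        self.cache[key] = result
        return result


class FakeController:
    """Stub network controller; just records what was asked for."""
    def __init__(self):
        self.paths_created = []

    def create_path(self, fqdn, port, url_path, priority):
        self.paths_created.append((fqdn, port, priority))
        return {"fqdn": fqdn, "port": port, "status": "path-established"}


controller = FakeController()
resolver = DancpResolver(controller)

# The application declares what it needs to reach, and how important it is.
resolver.request_path("db.example.internal", 5432, priority="high")
resolver.request_path("logs.example.internal", 514, priority="low")
resolver.request_path("db.example.internal", 5432, priority="high")  # cache hit

print(len(controller.paths_created))  # the repeat request never reaches the controller
```

The cache is what makes the DNS analogy work: most requests are answered locally, and only a genuinely new requirement ever travels up to the controller.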


Where is the work done?

This all presumes that the system in charge of spinning up the new server does not take responsibility for requesting or setting up network paths. In an elastic cloud system where you are expanding something like a web tier horizontally, you may want the design to be such that the web tier servers are very simple worker bees that should not need to think about network access requests; they may also be identical units of compute, so it might make sense for the system spinning them up to manage the network path requests on their behalf. However, if we are outside of elasticating a tier of compute and have more of a single-instance scenario, then it starts to make more sense to have the newly spun up server arrange its own network needs - hence DANCP!