--- My Inspiration ---
Looking for a house I can afford, and a mortgage to afford it with. It's not fun. But you have to do the research. Turns out there are some really good sources. Money Saving Expert's new mortgage calculator at https://www.moneysavingexpert.com/mortgages/best-buys/ is one, and the articles on http://www.charcol.co.uk are another.
One article in particular (http://www.charcol.co.uk/knowledge-resources/mortgages-and-me/article/escaping-the-london-doughnut-13575/) made interesting reading for me as it compared the idea and costs of buying outside of London against those of buying inside London. This is an idea I am trying to understand because it is not as obvious as it first seems. The general proposition is that London is expensive and outside London is cheaper, but a lot of it hinges on "averages", a metric which can be very misleading. So I wrote down some thoughts here and added them to the comments on the article.
--- My Thoughts ---
As a first time buyer comparing options both inside and outside London, I think the argument made in the article is academically true and well made, but in reality it is somewhat flawed.
----- Average house prices - bad metric? -----
One of the accepted truths about living in a city is the "living in a box" scenario: we accept the reality of the lack of space. But I don't know anyone who wouldn't choose a slightly bigger property when looking outside London. Even if you get more square metres per pound outside London, the idea that someone who could manage the repayments on a 524k mortgage in London is only going to take on around 55% of that for a property outside seems unlikely. More likely they will see 524k as the budget. Being smart they will try to reduce it, but not so far down that the average inside London can be compared with the average outside, because if this is a home, and a one-off investment, would you not want the best living space you can get? More likely you are going to spend 70% to 80% of the budget, still saving something but nowhere near as much.
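To put rough numbers on that, here is a minimal sketch of the comparison. It assumes the 524k figure discussed in the article, and treats the 55% and 75% spending levels as illustrative assumptions of my own:

# Rough comparison: the saving from buying outside London depends heavily on
# how much of the London budget you actually spend. The 524k figure is from
# the article; the 55% and 75% levels are my own illustrative assumptions.
LONDON_BUDGET = 524_000

def saving_if_spending(fraction):
    """Cash left over versus spending the full London budget."""
    return int(LONDON_BUDGET * (1 - fraction))

print(saving_if_spending(0.55))  # 235800 - the saving if you really only spend 55%
print(saving_if_spending(0.75))  # 131000 - the saving at a more realistic 75% of budget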
----- Long term cash or short term cash? -----
If you have the deposit for a 524k mortgage, then against the cheaper property outside London that same cash is an even bigger deposit in percentage terms. It would seem wise to put as much of it as you can into the deposit to reduce the total amount you will have paid out by the end of the mortgage, but that means making far smaller savings than the article suggests.
----- Train fares in the UK are expensive -----
The best way to save money on train fares is the 12 month season ticket. But that requires a hefty payment once a year - twice a year if you are buying with your partner and both of you work in London. It may in fact be enough to rule out living in certain commuter towns. Alternatively you could reduce the deposit and bank some cash for this purpose, or you could rein in the property value. But then what's the point of going outside London and losing all its events and activities if you can't have a better property outside of London?
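As a back-of-the-envelope illustration of how much commuting can eat into the saving (the season ticket price here is purely my own assumption; real fares vary a lot by commuter town):

# Lifetime commuting cost for a couple who both work in London.
# The 4,500 annual season ticket price is an assumed figure for illustration.
ANNUAL_SEASON_TICKET = 4_500
PEOPLE_COMMUTING = 2
MORTGAGE_TERM_YEARS = 25

lifetime_fares = ANNUAL_SEASON_TICKET * PEOPLE_COMMUTING * MORTGAGE_TERM_YEARS
print(lifetime_fares)  # 225000 over the term of the mortgage - a big dent in any saving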
I think the calculations are not as far apart as they first appear.
Monday, 27 October 2014
Monday, 11 November 2013
Inspired by... Tech - "Making up protocols for an SDN world"
--- My Inspiration ---
The Packet Pushers podcast "Show 166 - SDN Controller Strategies" http://packetpushers.net/show-166-sdn-controller-strategies/
(For those unfamiliar, PacketPushers.net is home to great podcast discussions on some seriously in-depth network topics. As I am not a network engineer some of it is way over my head, but I've learnt some good stuff from it, and occasionally they diverge into server and cloud discussions which I find really interesting to hear from the network engineers' point of view.)
On this episode Greg Ferro, Brent Salisbury, and Mike Dvorkin talk about Software Defined Networking, particularly in relation to managing the communication requirements of applications and servers. It is a really interesting topic because it is a crossover point between two goals: the server engineers' "Infrastructure As A Service" goal of designing infrastructure that provisions servers automatically, and the network engineers' "Software Defined Network" goal of automating network objects so that routes are provisioned as traffic needs them, using more attributes than simply the layer 3 source and destination addresses so that more intelligent routing choices can be made.
The question is, how do the newly spun up servers and their applications trigger the creation of the network paths they need?
--- My Thoughts ---
The servers/applications need to make a request to a network controller of some sort. This got me thinking about one probably crazy and unfeasible solution...
What simple examples of similar things do we have today?
DHCP? When a server boots and wants to pass traffic over the network it must check in with a "network controller" in the form of a DHCP server. Although in this case it is the operating system asking for a new IP address or a renewal of one, it is in a way asking for permission to communicate with other IP addresses on the subnet - although maybe not beyond it, because of blocking further down the network by firewalls or load balancers. This is because firewalls and load balancers, which are the passport authority for layers 4 to 7, don't generally integrate with the switches or DHCP servers which have authority for the local areas of layers 2 and 3. But integration between these guardians is where we are headed.
So what if we could take our Dynamic Host Configuration Protocol and extend it by lifting its attributes up the stack to include requirements up to layer 7, which could be passed to a network controller instead of a DHCP server, and this network controller would see to it that the network paths were set up across all the other network guardians? DANCP? - Dynamic Application Network Configuration Protocol - probably not catchy enough...
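Purely as a thought experiment, here is a minimal sketch of what such a request might carry. DANCP does not exist; the field names and values below are invented for illustration only:

# Hypothetical DANCP request: the attributes a booting server might send to a
# network controller instead of (or alongside) a plain DHCP request.
# Everything here is invented for illustration; no such protocol exists.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DancpRequest:
    mac_address: str                        # layer 2, as in DHCP today
    requested_ip: Optional[str] = None      # layer 3, as in DHCP today
    destinations: list = field(default_factory=list)  # layer 4 to 7 requirements

request = DancpRequest(
    mac_address="00:16:3e:aa:bb:cc",
    destinations=[
        {"fqdn": "db.internal.example", "port": 5432, "priority": "high"},
        {"fqdn": "logs.internal.example", "port": 514, "priority": "low"},
    ],
)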
How do we get the application's requirements?
Thinking again about similar interactions today, we could model it on the way the application stack calls DNS. When a server starts up, or an application starts up, the application would pass its network requirements as a call to a code library, which in turn calls an operating system DANCP resolver service. The requirements might include the destination FQDN, the port number, maybe a root URL path, and maybe even some sort of arbitrary priority for the communication in the context of the server's other communications - for example, communications to the database are higher priority than communications to a log server. This way the application is making a statement of its requirements up to layer 7 and potentially beyond. The local operating system DANCP service might report that it already has the network requirements resolved, but on an initial boot it would request these layer 3 to 7 requirements from a DANCP server / network controller, which would trigger the necessary paths to be set up.
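Sketching the client side of that idea, in the same spirit as a DNS stub resolver - there is no real DANCP resolver, so a stand-in function plays the part of the hypothetical OS service:

# Sketch of an application declaring its network requirements at start-up,
# the way it resolves names through DNS today. dancp_require() is a stand-in
# for the hypothetical OS resolver call; a real service would check a local
# cache and, on first boot, forward the request to the DANCP server / controller.
def dancp_require(fqdn, port, path=None, priority="normal"):
    print(f"requesting path to {fqdn}:{port} (path={path}, priority={priority})")

# The application states its needs up front, much like resolving names via DNS.
dancp_require("db.internal.example", 5432, path="/orders", priority="high")
dancp_require("logs.internal.example", 514, priority="low")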
Where is the work done?
This all presumes that the system in charge of spinning the new server up does not take responsibility for requesting or setting up network paths. In an elastic cloud system where you are expanding something like a web tier horizontally, you may want the design to be such that the web tier servers are very simple worker bees that should not need to think about network access requests; since they are identical units of compute, it might make sense for the system spinning them up to manage the network path requests. However, if we are outside of elastically scaling a tier of compute and have more of a single-instance scenario, then it starts to make more sense for the spun-up server to arrange its own network needs - hence DANCP!
Friday, 25 October 2013
Inspired by... Tech - "ZDNet's 'A look at a 7,235 Exabyte world' "
--- My Inspiration ---
My first blog post was meant to be a perfectly constructed informative "how to", or a well balanced opinion article with multiple sources, or just something carefully thought through, but it's taken me so damn long to post anything that I thought I should just get the ball rolling by posting a comment that I've just written on a ZDNet article (under my "jamfuse" handle).
The article is by Larry Dignan, who I hold in pretty high regard. It covers an IDC report about global data storage and business. You can see it here: http://www.zdnet.com/a-look-at-a-7235-exabyte-world-7000022200/
--- My Thoughts ---
There is some good stuff here but I think it begins to enter the realm of conjecture upon conjecture, and it seems to be somewhat static thinking, maybe not taking account of the fluidity that we are seeing entering system design and function.
"Tapes... will be tossed.." - well generally yes but they are incredibly dense mediums and don't suffer electro-static decay in the way that offline harddrives can do.
"...information will be viewed as a natural resource" - Ok are we talking "information" or "raw data"? For the latter, well as a basic example you can pipe a loop to /dev/null all day, there is no way that raw data is going to run out so it's not a natural resource. For the former, for information to be created we require CPU time, memory, and access to raw data (which we know can be infinite) and *maybe* storage if we want to keep the information we have created. Storage is the only one that is not transient. So as long as we adhere to the formula which goes something along the lines of...
"the cost of creating/buying storage dense enough to hold the information is less than or equal to the financial value of having the information"
... (which may be the formula the NSA use to justify their data centres) then we will always collect more information, and therefore information is not a natural resource. Possibly the only way these could be considered natural resources is if you chain the dependent parts of their production all the way to the extreme: you end up with electricity, which may indeed rely on a natural resource! Don't get me wrong though, there are plenty of big companies that would love you to think that data is a natural resource - ISPs, for example - because that way they could justify charging ridiculous amounts for bandwidth caps! Do ISPs see a yearly/bi-yearly cycle of data floods and droughts?
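That "formula" boils down to a simple decision rule. A trivial sketch, with invented numbers:

# Store the information only if the cost of holding it is no more than the
# value of having it. The figures below are invented for illustration.
def worth_storing(storage_cost, information_value):
    return storage_cost <= information_value

print(worth_storing(1_200, 5_000))   # True: store it
print(worth_storing(9_000, 5_000))   # False: let it go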
"Owning the data will be everything. The vendors that capture the most data win. Period." - No, definitely not "Period". That's a silo. And that's a problem. For me this is an example of thinking from my generation or even the one before mine. Do kids these days silo their music or stream it? And when they are older and designing the systems that we are designing today I suspect they will take a different mentality - stream when you need it. If your systems get so bloated that they can't move quick that's a problem. Web 1.0 was all about hypertext, basically just text for human consumption. Web 2.0 was about rich content, video, sound etc, for human consumption. Web 3.0 is said to be "the internet of things" and therefore a network where one object can talk to another object, data available via APIs in a machine readable form - data for machine consumption. So although we could create silos so long as the storage costs equal the benefit of the store, it will make our systems so bloated that we can't turn them and change that quickly, we can't be agile in an era in which we need to be agile. But along side the option to store everything we see more and more options to fetch things via more and more APIs. This brings an era of companies or databases storing one type of data and the option for our systems to subscribe to many sources and create the information we need when we need it. Store the fundamental elements, don't store the compound that is created from the elements and that can be recreated from the elements.
If we all silo up to the max we will duplicate everything many times and collapse under our own bloat. If we keep the data or information that is key to our businesses, and offer/fetch the rest via API, then we can stand on each other's shoulders and progress and prosper. The vendors that create the most understandable information from diverse raw data sources will win. (In my opinion.)
Friday, 26 July 2013
Inspired by... Blogs - "Time to have a go"
I have read and continue to read a lot of great blogs, be they tech examples, tech theory and opinion, or something from many different walks of life. I am very grateful that people take the time to actually put words and thoughts out there for others to consume. I have long wanted to do the same, either to give something back, or to create something, or simply to write ideas down in some vague attempt to remember them!
So this is the start of my attempt to do that. Hopefully it will continue. Maybe some people will like it. Let's see what happens.
jascbu