Archive for February 2013
I am pushing way beyond my graphene-thick understanding of microprocessor and internet addressing technologies, but I am intrigued by stuff I have heard recently, and hope that someone out there can shed more light on the subject.
Current chip technology mostly works in 64-bit ‘words’. In other words, a chip can process a single 64-bit word, or two 32-bit words, every cycle. That’s handy, because a 64-bit processor can deal with two internet addresses that conform to the 32-bit IPv4 addressing scheme at the same time, speeding up data flows around the internet.
Unfortunately, the world has pretty much run out of 32-bit addresses, just as the BYOD and M2M initiatives are adding billions of devices to the internet, all competing with me for the network’s attention. We are having to move to IPv6, which uses a 128-bit address scheme. That means it will take most chips two cycles to deal with an IP address, effectively quadrupling the time required to send a packet on its way. (Or you could split the address in two, run the halves in parallel through two 64-bit cores, and hope that what emerges still makes sense.)
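For illustration, here is a minimal Python sketch, using the standard library’s `ipaddress` module, of what ‘splitting the address in two’ means at the word level: a 128-bit IPv6 address broken into two 64-bit halves and reassembled.

```python
import ipaddress

# A 128-bit IPv6 address does not fit in one 64-bit machine word.
addr = ipaddress.IPv6Address("2001:db8::1")
n = int(addr)

hi = n >> 64               # top 64 bits
lo = n & (2**64 - 1)       # bottom 64 bits

# Reassembling the two halves recovers the original address.
assert ipaddress.IPv6Address((hi << 64) | lo) == addr
print(hex(hi), hex(lo))    # 0x20010db800000000 0x1
```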
Actually, because IPv4 and IPv6 will co-exist for a long time to come, it will probably take even more cycles to resolve an IP address, because the processor must translate addresses between the two incompatible schemes.
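One real mechanism for that co-existence is the IPv4-mapped IPv6 address (the ::ffff:0:0/96 range defined in RFC 4291), which embeds a 32-bit IPv4 address in the bottom of a 128-bit one. A quick sketch:

```python
import ipaddress

v4 = ipaddress.IPv4Address("203.0.113.7")

# Embed the 32-bit IPv4 address in the ::ffff:0:0/96 mapped range.
mapped = ipaddress.IPv6Address((0xFFFF << 32) | int(v4))

# The original 32-bit address is recoverable from the 128-bit form.
assert mapped.ipv4_mapped == v4
```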
In addition, router tables, the automated IP address databases, suddenly become much bigger. Where resolving an IPv4 address was like looking for a needle in a haystack, digital postmen are now looking for the equivalent of the Higgs boson.
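To make the digital-postman image concrete: a router picks the most specific matching prefix for each destination address. Real routers use specialised structures (tries, TCAMs) rather than the toy linear scan below, and the next-hop names here are invented, but it shows the longest-prefix-match rule the postmen follow, and why a much larger table of much longer prefixes means heavier work per packet.

```python
import ipaddress

# Toy routing table: (prefix, next hop). Next-hop names are invented.
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "gw-default"),
    (ipaddress.ip_network("10.0.0.0/8"), "gw-a"),
    (ipaddress.ip_network("10.1.0.0/16"), "gw-b"),
]

def next_hop(dst: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net, _ in routes if addr in net),
               key=lambda net: net.prefixlen)
    return dict(routes)[best]

print(next_hop("10.1.2.3"))   # gw-b: the /16 beats the /8 and the /0
print(next_hop("8.8.8.8"))    # gw-default: only the catch-all matches
```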
According to telecoms consultant Martin Geddes, the internet is likely to become less reliable, not more, as packets start to queue, waiting for microprocessors to figure out the next leg of their journey across cyberspace.
“IPv6 is a waste of time and money,” he says in a newsletter, Nuclear networking (subscription needed). “It is the wrong answer to the wrong question. It fails to tackle the fundamental problems of Internet Protocol: addressing the wrong thing (interfaces, not applications); tightly coupling the whole system; confusing naming and addressing; perpetuating hacks like DNS and Mobile IP to paper over the gaps; and a host of other sins condemning us to networking purgatory. Indeed, IPv6 will create a whole new slew of performance, security and implementation problems we have yet to fully experience.”
Geddes advocates that we re-engineer the internet using RINA – Recursive InterNetwork Architecture. But he also says a commercially viable use of RINA is 10 years away. Now is a good time to start planning the transition, or at least how to splint the fundamentally broken internet with something more robust.
Of course, the for-now answer is to build 128-bit processors specially to cope with IPv6 addressing. Chips with 128-bit floating point registers have been around for quite a while, mainly for high performance computing like counting molecules or decrypting Skype calls in real time, apps that need a high degree of accuracy.
But resolving internet addresses has, until now, not really been an issue. It’s going to be, so how are we going to do it?
Over to you.
The end game for the traditional telco business model is already in play.
On 12 February the European Commission cut €8bn from its €9.2bn broadband fund. This was money that telcos expected would be coming to them to spend on fibre networks.
The next day, the ITU said, “The move to IP-based communications is irreversible – and the timescales for business models, regulatory frameworks, development cycles and infrastructure investment in the internet world and that of traditional telecommunications may be dangerously out of sync.”
And on Valentine’s Day, at the launch of its software defined network strategy, Huawei’s director of solution marketing, Dai Libin, said telecoms operators “had to change their genes” if they are to survive. They can no longer afford to provide ever-faster performance if they cannot also reduce costs the way the computer industry has, he said.
That same day saw the official launch of B4RN, the community-funded point to point fibre to the home network in rural Lancashire. B4RN customers get a nominal 1Gbps symmetric service for a £150 connection fee plus £30/month, which you can halve if you want to give up your BT phone line and rely on Skype for voice calls.
In contrast, BT’s up to 330Mbps fibre on demand service, due out in spring, will cost £500 to connect and £38/month, plus a distance-related fee averaging £1,000. And it will be available only in BT’s fibre to the cabinet footprint. And you’ll have to hold on to your £15.45/month phone line.
Some at BT are certainly alive to the threats. Last October BT told ISPs about its new Multiservice Edge (MSE) roll-out that will see more than 500 data centres installed around the country on the “edge” of its network. This is to cope with greater consumer demand for data services, it said.
MSEs will give BT Vision subscribers a better quality experience because they cut down the distance signals must travel. They will also improve subscribers’ experience of Netflix, Facebook, YouTube and other “over the top” (OTT) services.
BT is also deeply involved with ETSI’s effort to standardise how certain network functions are virtualised; Don Clarke, BT’s head of network evolution innovation, is the working group’s technical manager, largely because he’s been studying the problem for the past two years.
Virtualising the network means that networks will be programmable. According to the pitch, it will be quicker and cheaper to provide and change services because all the devices in the data centre will be virtual machines and will run on very fast industry standard servers. Provisioning and changes will be done via a dashboard rather than physically patching cables and using command line instructions to install and set them up.
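As a sketch of what dashboard-driven provisioning implies underneath: the operator’s tooling submits a declarative request and sanity-checks it, instead of an engineer patching cables and typing commands. The request shape and every field name below are invented purely for illustration, not any vendor’s or ETSI’s actual interface.

```python
# Hypothetical provisioning request a dashboard might generate.
# All field names and values are invented for illustration only.
vnf_request = {
    "function": "firewall",
    "site": "edge-dc-042",
    "vcpus": 4,
    "memory_gb": 8,
    "attach_to": ["customer-vlan-1201"],
}

def validate(req: dict) -> bool:
    """Minimal sanity check an orchestrator might run before deploying."""
    required = {"function", "site", "vcpus", "memory_gb"}
    missing = required - req.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return True

assert validate(vnf_request)
```

The point of the declarative style is that changing a service becomes editing data, which software can version, validate and roll back, rather than editing live device state by hand.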
This is already at least partly true for core networks, if only because Cisco so dominates that market that it is effectively the industry standard. But the ETSI network functions virtualisation (NFV) initiative is really about increasing that agility across the entire network, even right into the home.
Clarke says his team wants to finish its initial work within 18 months. Telecom standards can take years or even decades to establish, so this urgency suggests a penny has dropped somewhere.
This software defined networking and/or NFV heralds so many changes in the traditional business models of equipment vendors and telcos that we could be at what the gurus call an inflection point. It is like the meteor that some say wiped out the dinosaurs.
It’s not the only source of change. So many subscribers are giving up their fixed line services for mobiles, or taking up cheaper offers from unbundled local loop operators like Sky and TalkTalk, that Ofcom is reportedly toying with the idea that the duopoly enjoyed by BT and Virgin Media should end at the kerb rather than at the wall plug inside your house.
This could make it easier for new fibre network operators like B4RN and Gigaclear to compete with BT and VM (and may be partly why VM was sold to Liberty Global, a US-based European cable TV operator). This is because homeowners could, as they do in Scandinavia, dig their own trench to the kerb and connect to the service provider of their choice. This would save the operator a lot of time, hassle and cost, around £100 per household.
There is already a robust public interconnect standard (Active Line Access), so in theory this should not be a problem.
However, BT is the monopoly fixed local access infrastructure provider in two-thirds of geographic UK. The reserved 800MHz mobile licence currently at auction will provide only a 2Mbps indoor connection. So for fibre to the kerb to happen on a large scale, Ofcom would have to revise the terms of BT’s physical infrastructure access (PIA) product. PIA’s costs, terms and conditions meant that none of the eight other network operators invited to join the BDUK purchasing framework for next generation access in rural areas was able to make money in competition with BT.
We can be sure BT (and other incumbent telcos) will continue to fight for its monopoly while building its replacement network. But will it run out of customers and money before the new network is fit for purpose?