Br0kenTeleph0n3

Following the broadband money

A numbers game

with 13 comments

I am pushing way beyond my graphene-thick understanding of microprocessor and internet addressing technologies, but I am intrigued by stuff I have heard recently, and hope that someone out there can shed more light on the subject.

Geddes – the internet is sooo broken

Current chip technology mostly works in 64-bit ‘words’. In other words, a chip can process a single 64-bit word, or two 32-bit words, every cycle. That’s handy, because a 64-bit processor can deal with two internet addresses that conform to the 32-bit IPv4 addressing scheme at the same time, speeding up data flows around the internet.

Unfortunately, the world has pretty much run out of 32-bit addresses, just as the BYOD and M2M initiatives are adding billions of devices to the internet, all competing with me for the network’s attention. We are having to move to IPv6, which uses a 128-bit address scheme. That means it will take most chips two cycles to deal with an IP address, effectively quadrupling the time required to send a packet on its way. (Or you could split the address in two, run the halves in parallel through two 64-bit cores, and hope that what emerges still makes sense.)
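To make those widths concrete, here is a minimal Python sketch (purely illustrative; real forwarding hardware does none of this in software) showing that a 32-bit IPv4 address fits in half of one 64-bit word, while a 128-bit IPv6 address spans two of them:

```python
import ipaddress

# A 32-bit IPv4 address fits comfortably inside one 64-bit word...
v4 = int(ipaddress.IPv4Address("192.0.2.1"))
assert v4 < 2**32

# ...while a 128-bit IPv6 address must be split across two 64-bit words.
v6 = int(ipaddress.IPv6Address("2001:db8::1"))
high = v6 >> 64            # upper machine word: the routing prefix half
low = v6 & (2**64 - 1)     # lower machine word: the interface-identifier half

print(f"{high:016x} {low:016x}")  # 20010db800000000 0000000000000001
```

Both example addresses come from the documentation ranges (192.0.2.0/24 and 2001:db8::/32), so nothing here refers to a real host.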

Actually, because IPv4 and IPv6 will co-exist for a long time to come, it will probably take even more cycles to resolve an IP address, because the processor must translate addresses between the two incompatible schemes.

In addition, routing tables, the automated IP address databases, suddenly become much bigger. Where finding an IPv4 address was like looking for a needle in a haystack, digital postmen are now looking for the equivalent of the Higgs boson.
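As a rough sketch of what a lookup involves, here is a toy longest-prefix-match table in Python. This is illustrative only (real routers implement this in tries and TCAMs in silicon, and the next-hop names below are invented):

```python
import ipaddress

# Toy IPv6 routing table: prefix -> next hop (hop names are invented).
routes = {
    ipaddress.ip_network("2001:db8::/32"): "peer-A",
    ipaddress.ip_network("2001:db8:1::/48"): "peer-B",
}

def next_hop(dest: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(dest)
    matching = [net for net in routes if addr in net]
    return routes[max(matching, key=lambda net: net.prefixlen)]

print(next_hop("2001:db8:1::5"))   # peer-B: the /48 beats the /32
print(next_hop("2001:db8:ff::5"))  # peer-A: only the /32 matches
```

The point of the sketch: the work grows with the number of prefixes in the table, not just with the width of one address.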

According to telecoms consultant Martin Geddes, the internet is likely to become less reliable, not more, as packets start to queue, waiting for microprocessors to figure out the next leg of their journey across cyberspace.

“IPv6 is a waste of time and money,” he says in a newsletter, Nuclear networking (subscription needed). “It is the wrong answer to the wrong question. It fails to tackle the fundamental problems of Internet Protocol: addressing the wrong thing (interfaces, not applications); tightly coupling the whole system; confusing naming and addressing; perpetuating hacks like DNS and Mobile IP to paper over the gaps; and a host of other sins condemning us to networking purgatory. Indeed, IPv6 will create a whole new slew of performance, security and implementation problems we have yet to fully experience.”

Geddes advocates we re-engineer the internet using RINA – Recursive InterNetwork Architecture. But he also says a commercially viable use of RINA is 10 years away. Now is a good time to start planning the transition, or at least how to splint the fundamentally broken internet with something more robust.

Of course, the for-now answer is to build 128-bit processors specifically to cope with IPv6 addressing. Chips with 128-bit floating-point registers have been around for quite a while, mainly for high-performance computing like counting molecules or decrypting Skype calls in real time, apps that need a high degree of accuracy.

But resolving internet addresses has, until now, not really been an issue. It’s going to be, so how are we going to do it?

Over to you.


Written by Br0kenTeleph0n3

2013/02/19 at 08:01

13 Responses


  1. IMO I can’t see this being a problem for ordinary people because CPUs tend to deal with information at a much lower level of programming. I do see where you’re coming from, but it might be more of an issue for major servers or data centres, and even then I suspect that processing such things would only produce a negligible difference.

    Certainly this is not an issue that any of the ISPs we’ve spoken with have EVER raised as one of their concerns about IPv6. On the other hand I always tended to nod off in class when the teacher started talking about binary, machine code and ZzzzZz :) .

    Mark
    ISPreview.co.uk

    Mark (ISPreview)

    2013/02/19 at 08:26

  2. David,

    I recall you got together some people on IPv6. Should this be on the list for FISP to encourage debate on at some stage?

    Mike

    Michael Rowbory

    2013/02/19 at 08:35

  3. Don’t subnet masks solve this anyway? Routers surely don’t bother with all of an IPv4 address, only the significant parts, a bit like postcodes: if it’s PE8 6QQ you get as far as the PE and send it to the Peterborough sorting office.

    “The first sixty four bits (blue) are network bits, the remaining ones are the host’s interface identifier (host bits),” according to http://ciscoiseasy.blogspot.co.uk/2011/05/lesson-56-introduction-to-ipv6-address.html – so the 64-bit routers get it to your network, and your 64-bit router moves it around internally?

    PhilT

    2013/02/19 at 09:10
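PhilT’s postcode analogy above can be sketched with Python’s standard `ipaddress` module (illustrative only; the address is from the 2001:db8::/32 documentation range): the /64 routing prefix and the host bits separate cleanly, and core routers only need the prefix.

```python
import ipaddress

# The /64 prefix is the 'postcode'; the remaining 64 host bits
# only matter on the final hop inside the destination network.
iface = ipaddress.IPv6Interface("2001:db8:abcd:12::7/64")

network = iface.network                 # what core routers match on
host_bits = int(iface.ip) & (2**64 - 1) # what the last-hop router uses

print(network)    # 2001:db8:abcd:12::/64
print(host_bits)  # 7
```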

  4. It’s absolute nonsense.

    We have been using IS-IS in IP networks for over 20 years. The addresses look like this:

    49.0001.00a0.c96b.c490.00
    49.0001.2081.9716.9018.00

    They can be up to 160 bits!

    regards,
    Neil

    neilmcrae

    2013/02/19 at 09:30

    • Sorry to be dim, but why the hype over IPv6 then? What does IS-IS give you that IPv6 doesn’t (apart from more potential addresses)? Or is it that IPv6 is the address book, and IS-IS is the instructions for how to find the quickest way to deliver a letter to that address?

      Br0kenTeleph0n3

      2013/02/19 at 22:32

      • There is no link. IS-IS is an internal routing protocol that works and scales well.

        neilmcrae

        2013/02/20 at 07:43

      • Thanks Neil. So does that mean you still have to work with IPv6 addresses, and if so, what does that mean for the ASIC makers? Will they be able to cope once BYOD and M2M really get off the ground?

        Br0kenTeleph0n3

        2013/02/20 at 14:20

  5. Some confusion here. Doubling the number of bits in a CPU architecture does not mean it could have two IPv4 addresses processed at the same time in a single instruction. Routing a packet will take tens of thousands of instructions, so with pipelining, caching and the myriad of other techniques that modern processors use, the number of bits in a CPU architecture does not have anywhere near the impact suggested above.

    However, modern processors have multiple hardware threads in a single core, so one core could process say 8 threads of execution at the same time and each one of those could handle a packet each. The trend of having more hardware threads on a core will continue for a while yet, even if the processors don’t get that much faster in terms of clock speed.

    Read Computer Architecture: A Quantitative Approach by Hennessy and Patterson to get the proper treatment as to why the conclusions in this article are somewhat off base.

    Clive

    2013/02/19 at 10:07
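Clive’s point about hardware threads can be sketched with a thread pool standing in for one core’s SMT threads (purely illustrative; `route_packet` is an invented stand-in for the per-packet lookup work, not anything a real router runs):

```python
from concurrent.futures import ThreadPoolExecutor

def route_packet(packet_id: int) -> str:
    # Stand-in for the per-packet lookup and forwarding decision.
    return f"packet {packet_id} routed"

# Eight workers stand in for eight hardware threads on one core,
# each handling a packet independently of the others.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(route_packet, range(16)))

print(len(results))  # 16
print(results[0])    # packet 0 routed
```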

  6. You don’t need to worry about 128-bit processors and routing of IP addresses. The heavy lifting of IP routing has been done using custom logic in FPGAs (Field Programmable Gate Arrays) or custom ASICs (Application Specific Integrated Circuits) for a long time now. They are much faster than any processor. The amount of routing that the average server or PC CPU needs to concern itself with is so low that using many cycles is not an issue; in fact it will be using a few cycles for 32 bits already. Processors are so fast that the issue is more to do with the latency of getting data into and out of the processor from memory. Fast memory is very expensive, which is why processors have only a few MB of cache, of which only a few KB runs at the processor clock speed. The rest runs at a fraction of the core speed. Lastly, the multiple GB of RAM that we have runs like a snail compared to the processor.

    There is no need for floating point to decrypt Skype. Floating point is only required when dealing with non-integer numbers. Skype uses AES for encryption, which is integer based, and the sound samples can also be integers.

    Bob

    2013/02/19 at 10:47

  7. Have to agree with Neil above. There is no valid correlation between either the endianness or the register width of a CPU and the number of bits in an internet address. Routers and switches that scale do it all in custom silicon, such as FPGAs. Mr Geddes is proposing something similar to “if we were all using hydrogen-powered cars we wouldn’t have a fossil fuel crisis”. Events have happened that have brought us to the place we are today – businesses don’t understand their business processes sufficiently to design protocols capable of scaling the way IPv4, IPv6 and DNS have (not to mention unseen heroes such as OSPF, IS-IS, etc).

    Apples with pears on this one, I feel, Ian.

    RobL

    2013/02/19 at 13:42

    • Firstly, thank you to everybody who is trying to enlighten me. I know it’s an uphill battle for you, but please persevere. Hopefully it will also enlighten others who read this blog.
      Secondly, I did warn everyone that my knowledge in this area is as thick as graphene, so thanks again for making the effort.
      If we accept that the routing processors are fast enough, in fact are waiting for data to process, isn’t that a problem in itself – queues will form somewhere in the system, and they will lead to latency, jitter and dropped packets?
      What about the routing tables issue? Surely as more IPv6 addresses go live, the routing tables must expand? Do we have to depend on Moore’s Law to get us through?

      Br0kenTeleph0n3

      2013/02/19 at 22:49

      • I think you have some very good answers above. And your last question here may be right – but only if you assume that the only way to handle a massive increase in the number of addresses and traffic is by increasing switches’ and routers’ capacity to send packets the right way, without restructuring networks or applying smart new ideas.

        But it’s not. Instead, all sorts of activities go on to help us out. One is that when services grow they get distributed, such as Facebook opening new regional data centres to cope with demand, thus routing traffic to and from the nearest facility. This also happens to mirror a very common user pattern – Facebook is very often used to communicate with friends and relatives who are geographically close by, and in most cases this also means nearby in terms of the topology of the internet. And routing packets locally will only have to deal with parts of the whole address, as someone pointed out above.

        And one of the smart things about Spotify is that it applies a combination of central data and peer-to-peer technology to quickly start a tune while at the same time finding and streaming as much as possible from peers nearby in the network. It should work better the more users there are.

        Yet another process means that the physical topology of the network gets more and more exchange points, such as Netnod or LINX. Thus traffic can be routed more efficiently and directly, also at tier 1 and 2 levels of the network. It resembles a mesh with ever smaller sub-parts, allowing for smarter routing where each level of routers once again may only deal with its part of the address.

        So, indeed, congestion may happen, but we – or should I say all service providers, network operators and owners of networks – will deal with it in a multitude of ways, of course including, but not limited to, increasing each physical router’s capacity to sort out and forward packets.

        Tobias Ahl

        2013/02/26 at 00:01

      • @Tobias Fair points, all of them. And I think the emergence of content delivery networks makes your points.
        However, to change the discussion slightly, it seems to me we are looking at a change in the philosophy that governs network development. (Voice-based) telecommunications was all about predictability, which it got from standards and interoperability. The industry did a very good job of being predictable for 140 years.
        But the next bit is data driven, and pragmatic expediency, rather than predictability, is the guiding light. An example: the internet follows a ‘best effort’ rule. This would not have been acceptable in the former voice world, where you either had a connection or you didn’t.
        So the question arises, what service level agreements will emerge as the carriers switch to Ethernet/IP and switch off their TDM networks, and what sanctions will users, especially retail consumers, have against suppliers who let them down?

        Br0kenTeleph0n3

        2013/03/01 at 00:48

