Following the broadband money

How to avoid the coming broadband catastrophe


Broadband ought to be profitable to network operators, but a flawed understanding of the operational realities of the change from circuit switching to statistical multiplexing of packets has led to inappropriate business models and disastrous incentives for management.

Unfortunately, acknowledging the truth and acting on it involves huge risk for the first operator to break ranks. This is because it calls on users to pay proportionately for the quality of experience they want rather than passively accept paying for an undifferentiated pipe that may or may not be fit for service.

The risk is that customers will defect to the deluded, or at least mistaken, promises of competitors, says Martin Geddes, founder of Martin Geddes Consulting and former strategy director of BT’s Innovate and Design department.

(The password for this 8'20" video interview with Geddes is PolyService:

MartinGeddesFutureofBroadband from Ian Grant on Vimeo.

It took 14 hours to upload. Network fit for purpose? I don’t think so.)

Geddes and colleagues Neil Davies and Peter Thompson of Predictable Network Solutions have spent a year refining the empirical evidence of network behaviour into a radical new description of broadband economics. They unveiled it yesterday to a select but critical group that included sceptical collaborator Dean Bubley, whose counter views have helped to refine the model, Geddes says.

Br0kenTeleph0n3 was allowed to attend under the Chatham House Rule, which means we can report what was said but not identify who said it. We can say, however, that delegates included representatives from incumbent telcos, altnets, mobile network operators, service providers, regulators, investment companies and trade associations.

The basic thesis is that users should be able to decide and pay for the quality of experience they require. The corollary is that operators should enable them to split the service so that applications or transactions that require higher bandwidth receive it for the duration, and that those with less time-critical needs are shifted to times when the network is not as busy.

This would lower peak traffic demand and raise average network utilisation. This saves operators’ capex and optimises sustainable revenues by tying them directly to the cost of providing the requested service.

This linking of supply with demand means that operators no longer have to overprovision network capacity and that they get a fair price for the quality of service they deliver.

However, it requires them to change their mindset from selling a “monoservice” (packets as circuits) to providing a “polyservice” (different user experiences), and to rebuild their billing systems.

One needs to think of networks as markets, or in Geddes’ term, trading platforms, where users can trade packet deliveries based on time. For example, a movie download needs a burst of bandwidth for the first three minutes, to buffer enough for the viewer to start watching; the next 87 minutes can be delivered at a cheaper rate, just in time to match the viewing. Similarly, a user could order a movie for the following week, which would then be downloaded in a quiet period, at marginal cost, and stored locally until needed. A streamed video or voice call, which requires a constant connection, would be charged for accordingly.
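The trading idea can be made concrete with a toy tariff. The numbers and the `price_per_gb` function below are purely illustrative assumptions of mine, not a published Geddes/PNS pricing model: the point is only that delivery bought with a relaxed deadline can be priced towards marginal cost.

```python
# Hypothetical sketch: deadline-based pricing of packet delivery.
# All rates are invented for illustration.

def price_per_gb(deadline_hours, base_rate=0.50):
    """Price (in pounds/GB) falls the longer the buyer will wait.

    base_rate is immediate, streaming-grade delivery; deferred
    traffic decays towards an assumed marginal-cost floor.
    """
    floor = 0.05  # assumed marginal cost of moving a GB off-peak
    if deadline_hours <= 0:
        return base_rate
    return max(floor, base_rate / (1 + deadline_hours))

# A 1.5 GB movie: ~50 MB bought at streaming grade to start playback,
# the remaining 1.45 GB bought with a 12-hour deadline.
urgent_cost = 0.05 * price_per_gb(0)
relaxed_cost = 1.45 * price_per_gb(12)
print(f"urgent: £{urgent_cost:.3f}, relaxed: £{relaxed_cost:.3f}")
```

Under these assumed numbers almost all of the movie travels at the floor rate, which is the "just in time" delivery described above.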

Geddes argues that all data flows are basically rivals to each other for bandwidth. Everyone else’s data “pollutes” your flow, as Geddes puts it.

But because not all applications require the same quality of service (especially time), they can be traded, with less time-critical flows being traded for more valuable ones, and priced accordingly.

This overcomes the problem of over-the-top (OTT) operators, which some telcos claim are currently getting a free ride on the networks. “It would pay BT to pay Akamai to mark those packets that could be delivered later, because of the capex saving it would enjoy,” said one delegate.

A feature of network behaviour is that the problems arise in the access network, i.e. the bit between the user and the local exchange. Traffic delays in a network are “composable”: in very large aggregates, such as the core network, the effect of adding more traffic follows well-known statistical laws.

But even a few low bit-rate flows in access networks can cause “bad coincidence”, demand spikes that lead to delays and packet loss, largely because of the way the internet protocol manages data flows. Network operators currently work around this by reserving some bandwidth, say the first 100kHz, for VoIP or IPTV traffic. This perversely destroys the benefits of statistical multiplexing by creating permanent virtual circuits, which sit wholly wasted when not in use.
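A minimal arithmetic sketch shows why a hard reservation wastes capacity. All link sizes and loads below are assumptions chosen for illustration, not measurements from the text.

```python
# Illustrative numbers: why a permanently reserved slice undoes
# the statistical-multiplexing gain on an access link.

LINK = 100.0                # Mbit/s access link (assumed)
RESERVED = 20.0             # slice permanently reserved for VoIP/IPTV
voip_load = 2.0             # what the reserved services actually use
best_effort_demand = 95.0   # everything else wants this much

# Partitioned: best-effort traffic may only use the unreserved slice,
# even while most of the reservation sits idle.
partitioned_served = min(best_effort_demand, LINK - RESERVED)

# Pooled: all traffic shares the link; VoIP takes only what it needs.
pooled_served = min(best_effort_demand, LINK - voip_load)

idle_reservation = RESERVED - voip_load
print(partitioned_served, pooled_served, idle_reservation)
```

With these numbers the partition strands 18 Mbit/s of idle capacity while best-effort traffic is throttled. Note the pooled case alone does not protect VoIP quality under contention; scheduling delivery against each flow's timeliness needs is exactly the problem the polyservice approach is aimed at.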

This is why “throwing bandwidth” at the problem is an expensive, ineffective solution, although very good for equipment vendors. It is also why investors have largely lost faith in telcos’ promises of returns on investment, especially for large transformational projects such as next-generation broadband.

Geddes argues for telcos to adopt a new approach to network planning and design called AREA (for [user] Aspiration, [network] Requirement, Execution and Assurance).

Applying these principles to an incumbent’s network upgrade allowed the telco to delay its spend by seven to 13 months while improving users’ quality of experience, Davies says. This was because between 40% and 60% of the traffic was peer-to-peer, mostly data backups. Time-shifting reduced this to 10%.
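A back-of-envelope calculation, using the figures quoted above, shows why time-shifting defers capex. The absolute busy-hour demand and the interpretation that the P2P share falls from the midpoint of 40-60% down to 10% of the original peak are my assumptions.

```python
# Toy calculation: effect of time-shifting P2P backup traffic
# on busy-hour capacity. Figures are illustrative assumptions.

peak = 10.0           # Gbit/s busy-hour demand (assumed)
p2p_share = 0.5       # midpoint of the quoted 40-60% P2P share
shifted_share = 0.1   # P2P left in the busy hour after time-shifting

non_p2p = peak * (1 - p2p_share)            # traffic that stays put
new_peak = non_p2p + peak * shifted_share   # busy hour after shifting
saving = 1 - new_peak / peak                # fraction of peak removed
print(new_peak, saving)
```

Under these assumptions the busy-hour demand falls from 10 to 6 Gbit/s, i.e. the operator needs 40% less peak capacity, which is capacity it can defer buying.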

There was a consensus that operators’ engineers were either explicitly or implicitly aware that the traditional business model, which regards services as “pipes”, is increasingly at odds with reality. The introduction of new devices such as the iPhone has caused major problems, and these are likely to get worse, just as users are coming to expect a broadband connection as a human right.

But some are taking action. Many people died needlessly because their medical records were destroyed in the devastating Japanese tsunami. In the aftermath, the government, telcos and users decided that the replacement network had to be secure, resilient and built around human needs rather than telco business models. They are now adopting some of the principles set out above.

Present western telco executives are ignoring the problems, hoping nothing on a tsunami scale happens before their pensions and share options vest. Regulators are more interested in perpetuating themselves than in setting up conditions where regulation can be less intrusive. And politicians are easily swayed because of their short term focus and lack of technical expertise.

A perfect storm is building.


Written by Br0kenTeleph0n3

2012/11/13 at 07:01

12 Responses


  1. I remember an organisation that tried to position something similar (without the extensive research) with Ofcom. It was called an ‘upto’ price for an ‘upto’ service. Or the ‘upto’ campaign….

  2. Reblogged this on ytd2525.


    2012/11/13 at 09:23

  3. The Internet Engineering Task Force opted for generous provisioning after each failed attempt to get a consensus on QoS. The notion of managed data flows working within a finite system with known operational parameters works conceptually, but demands a level of end-to-end control which may be impossible to establish.

    The insights on busy-hour resources and how these change as network load increases suggest Predictable Network Solutions’ work should be included in Ofcom’s Infrastructure Report. For those interested in transparency about what you’re paying for, and in the network planning rules used to create ISP packages, their work should be compulsory reading.

    Jumping from network analysis to offering a solution is a much bigger step, and while the notion of new billing platforms and control for the operator, with lower capex will sell consultancy, it underplays the need to make the data which describes the resources that constitute our internet access services publicly available so an informed debate can occur.

    In terms of setting traffic priorities, I would love this router setting to be under the control of the user. BT’s VoIP service is, I think, priority 2, less important (to BT, not to me) than BT’s Vision service, which is priority 1. It is not my router so I am not allowed to change the setting, which is a shame as it impacts video telephony quality.

    The Predictable Network Solutions model probably supports an option where I get to set the traffic priority but this would be a billable value add option. The founders of the internet would squirm.

    NGA for All

    2012/11/13 at 11:39

  4. […] see that Ian Grant (usually focussed on problems with raw speed and/or procurement scandals) has blogged on what looks to have been a most interesting and informative meeting on some of the issues, albeit he has used the kind of doom and gloom (impending crisis) […]

  5. The existing fiber network can provide sufficient bandwidth for decades to come, with only the routers needing to be upgraded as this new light-twisting technology emerges.

    Jim Fell

    2012/11/19 at 16:38

    • Hi Jim – thanks for the comment. I don’t disagree, but as Geddes and Davies point out, the problem is not with the core networks, which are pretty much all fibre anyway, but with the still largely copper-based access and ‘middle mile’ networks. Simply, there need to be end-to-end fibre links, and the sooner we accept that and get on with making it happen the better. Only, it should not take, as some incumbents would like, 30 to 40 years. That’s simply not on.


      2012/11/19 at 17:12

      • In that case, I think you may find Google’s fiber project of interest.

        Jim Fell

        2012/11/19 at 17:20

      • Still, the question is how ‘end-to-end fibre’, which in practice means just the customer end, should be funded when people want to pay £15/month at most.

        ‘Speed can be dangerous’. Geddes does not say there has to be fibre.

        What do you call the ‘middle mile’?

        Sad that an intelligent discussion is reduced to quotes like ’30 to 40 years’. Any evidence? FTTP, FOX?

        Another comment here –


        2012/11/19 at 18:24

      • “We know that we are eventually going to end up at an all-fiber network,” said Wim De Meyer, vice president of network planning at Belgacom, “but the question is really the timeframe. We can’t get there in a couple of years. It’s going to be a long process and could take us thirty or forty years to get there.”


        2012/11/19 at 23:04

      • And maybe mobile will remove the need for fixed connections, particularly as people are finding they can exist without a landline. If the tariff is right and the speed is there.


        2012/11/19 at 23:22

  6. There’s a nice book about this idea:

    Title: Technical, Commercial and Regulatory Challenges of QoS: An
    Internet Service Model Perspective (The Morgan Kaufmann Series in
    Networking) [Hardcover]
    Author: XiPeng Xiao
    Publisher: Morgan Kaufmann; illustrated edition (September 22, 2008)
    ISBN-13: 978-0123736932

    It basically says: Yes, you can implement a differentiated service, but
    it’s not worth the money. Adding more bandwidth is cheaper. The author
    built one of those QoS-enabled ISPs, so he knows about the details.

    Kurt Jaeger

    2012/11/24 at 20:28
