Cloud Computing – New Paradigm or Symptom?

Submitted on 09 Mar 2012 – 15:54

By Lutz Schubert, Head of the Intelligent Service Infrastructures Department, High Performance Computing Center Stuttgart (HLRS)

Participating in current debates about “the cloud” feels a bit like scholars of Plato must have felt when confronted with the concept of the “idea”: “What is it that makes a cloud?” “Does it have to be elastic?” “Is it only Software-as-a-Service?” One generally faces two opposing positions, ranging from “the cloud is the next stage in the evolution of the internet” [HP] to “all it is, is a computer attached to a network” [Larry Ellison]. There is, however, no question that the phenomenon called “clouds” exists, and it certainly has an impact on the current IT landscape.

What has changed?

It is often claimed that clouds introduced a completely new way of service and resource provisioning, but that is not true as such: managed server farms, grids, thin clients – we have seen it all before. What has really changed is neither concept nor technology, but a growing demand from customers to access content, services and full-blown applications over the internet. As demand exceeded provisioned capacity and the number of requests over time became unpredictable, a solution was required that extended the elastic capabilities of server farms and grids: the cloud.
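To make the notion of elasticity concrete, the following minimal sketch (not from the article; the capacity figure and names are assumed) shows how provisioned capacity can follow an unpredictable request rate instead of being sized for a fixed peak:

```python
import math

# Illustrative figure: how many requests per second one server instance handles.
REQUESTS_PER_INSTANCE = 100

def instances_needed(request_rate: float, min_instances: int = 1) -> int:
    """Demand-driven (elastic) provisioning: capacity follows the observed load."""
    return max(min_instances, math.ceil(request_rate / REQUESTS_PER_INSTANCE))

# Demand fluctuates unpredictably over time; capacity tracks it rather than a fixed peak.
for rate in [40, 250, 1200, 90]:
    print(f"{rate} req/s -> {instances_needed(rate)} instance(s)")
```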

The real change was therefore not brought about by the cloud, but by a shift in customer and user behaviour. In a modern world built on the quick satisfaction of needs, quality becomes more and more secondary to speed. What is more, with modern technology, quality can be produced more quickly, easily and cheaply – many would argue that the “depth” of the production suffers this way, but that argument is as old as human civilisation. Flickr is just one demonstration of this: even amateurs can take pictures that exceed what we expect from professionals.

As such, app providers now compete with professional software houses, low-budget movies compete with Hollywood, free podcasts and ebooks replace books, and so on. Our culture has reached a point where sufficient knowledge and technological support exist to narrow the gap to specialist expertise. Gone are the days when professionalism paid. In its stead comes a model in which the real payment lies in the sheer mass of consumption. Call it cloud, internet of services, or future internet:

What counts is the impact

We take this provisioning model for granted on so many levels that we tend to overlook the technical and economic problems that arise with it. Particular attention must be paid to the factors of cost, investment and profit: non-professional providers are often happy to contribute their content for next to nothing, thereby undermining all classical cost models. Much content is funded through advertisements, but their value is constantly decreasing. Profit can thus only be created through sheer numbers of users. This also revives the classical debate about copyright protection and recovering the cost of unlawful copies, versus attracting buyers through low prices.

Yet although volume matters and is reflected in the growth in demand, the actual technical capabilities in this context are limited. While we treat the network as if it could grow with this increasing demand, bandwidth and latency are physically constrained and improve at a much slower rate. The current model of elasticity reacts to the number of requests, but not to their location – an issue that is less problematic for local content.
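As a rough illustration of that limitation (the regions, latencies and per-instance capacity below are assumed for the example, not taken from the article), a purely count-based scaler adds instances when load grows, yet if all of them sit in one home region, latency for distant users does not improve:

```python
import math

# Assumed round-trip latencies in milliseconds between a request origin and a serving region.
LATENCY_MS = {("eu", "eu"): 20, ("eu", "us"): 120, ("us", "us"): 20, ("us", "eu"): 120}

def scale_by_count(request_origins, per_instance=100, home_region="eu"):
    """Count-based elasticity: adds instances, but always in the same home region."""
    instances = max(1, math.ceil(len(request_origins) / per_instance))
    # Every request is still served from `home_region`, no matter how many instances exist.
    avg_latency = sum(LATENCY_MS[(o, home_region)] for o in request_origins) / len(request_origins)
    return instances, avg_latency

origins = ["us"] * 180 + ["eu"] * 20   # most demand originates far from the provisioned capacity
print(scale_by_count(origins))          # -> (2, 110.0): more instances, but latency stays high
```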

Community of providers, hosts, consumers and users

Whether clouds or grids, the main purpose is to integrate external resources, or conversely to outsource local capacities, so as to reduce the load on an organisation’s own infrastructure and to save on management costs. New models address in particular the need for service provisioning under dynamic conditions, i.e. where the number of consumers and users is not known in advance and may change drastically over time. It is therefore not surprising that many companies come up with their own proprietary solutions to meet the specific behavioural needs of the services they offer.

This model is, however, also highly appropriate for private content provisioning, depending on the amount of interest it creates: hosting a single instance is fairly cheap, and as demand grows, payments can compensate for the cost – provided an appropriate cost model is found.

The current development of the service market shows one additional major change: not only do enterprises outsource their infrastructure, for example to the cloud, they also shift a growing amount of responsibility onto the customer. Online banking, for example, moves the effort to the customer’s side; telecommunication providers expect consumers to run their own local area networks, and so on. This is the only way to cope with the growing scale: distribute work and data.

Is this nothing but the final rise of the producing consumer – the Prosumer – so often associated with Web 2.0? File sharing, chatting, social networks and the like all build a society, and even a market, on a community of users rather than on a central enterprise. There is no single point of data, just as there is no single point of computation or, in fact, of consumption. Most users aggregate and even compose these individual services into a composition that, more or less, defines them and is personalised to their needs. This means that the future internet is far more dispersed than even current cloud providers foresee. What does it look like, this

“Cloud” Ecosystem of the Future

Personalisation, aggregation and composition over a widely dispersed mesh of resources and services become major priorities as the community of “social prosumers” grows. A major trailblazer can be seen in advertising: adverts are already personalised according to the user’s public web profile, device and more. In the near future, advertisements will aggregate profile information even further to generate personalised stories that involve the consumer directly, making him virtually the centre of attention. Obviously, this is not restricted to advertisements, but can be extended to any next generation of content.

The individual on the internet is no longer a passive consumer, nor merely a source of data, but also a source of processing power, innovative services and more. The logical next step – from the internet, via grids and Web 2.0, to clouds – is therefore a (distributed) managed resource mesh over all connected devices. This implies a wider heterogeneity than is apparent at first: not only do devices differ, but so, more importantly, do users, services, content, connectivity, operating systems, availability and so on.

The current approaches to dealing with this are highly restrictive in the sense that they limit heterogeneity to a manageable range, for example through homogeneous data centre organisation, restriction to specific operating systems and so on. The alternative approach subsumes heterogeneity by adding a higher-level stack that implicitly exposes the slowest (sic!) common denominator.
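A minimal sketch of that effect (device names and figures are invented for illustration): a uniform layer over heterogeneous resources can only expose the capabilities that every member supports, at the pace of the weakest participant:

```python
# Hypothetical devices joining a shared resource mesh.
devices = [
    {"name": "datacentre-node", "caps": {"compute", "storage", "gpu"}, "bandwidth_mbps": 10_000},
    {"name": "laptop",          "caps": {"compute", "storage"},        "bandwidth_mbps": 100},
    {"name": "phone",           "caps": {"compute"},                   "bandwidth_mbps": 20},
]

# A uniform higher-level stack can only offer what all devices have in common ...
common_capabilities = set.intersection(*(d["caps"] for d in devices))
# ... and its effective speed is bounded by the slowest participant.
effective_bandwidth = min(d["bandwidth_mbps"] for d in devices)

print("exposed capabilities:", common_capabilities)         # {'compute'}
print("effective bandwidth (Mbit/s):", effective_bandwidth)  # 20
```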

But surely, with modern multicore machines, performance should be no problem? Wrong! When we talk about performance on the web, we mostly refer to the most limiting factor: communication – not compute power.

Another essential paradigm change becomes notable here. We used to treat computation as expensive and communication as essentially free: data was local and supplied faster than the processor could consume it. This no longer holds true; instead, computational power (the number of processors) is cheap, but data communication delays all execution.
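A back-of-envelope calculation makes the imbalance tangible (the link speed and per-core throughput below are assumed round figures): while a single gigabyte crosses a typical wide-area link, one processor core could have executed tens of billions of operations:

```python
GIGABYTE = 1e9                 # bytes
LINK_BANDWIDTH = 100e6 / 8     # a 100 Mbit/s link, expressed in bytes per second
CORE_OPS_PER_SEC = 1e9         # conservative: one billion operations per second per core

transfer_time = GIGABYTE / LINK_BANDWIDTH            # ~80 seconds for 1 GB
idle_operations = transfer_time * CORE_OPS_PER_SEC   # work the core could have done meanwhile

print(f"transfer of 1 GB: {transfer_time:.0f} s")
print(f"operations one core could perform in that time: {idle_operations:.0e}")
```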

This is, however, in stark contrast to how we are used to thinking about and programming computers. Programming models have generally been developed to cater optimally for localised, sequential, single-processor machines with few communication concerns. Over decades these models have evolved into highly efficient and intuitively usable languages that now, all of a sudden, may become obsolete. High Performance Computing is one of the few domains where communication and data exchange have played a major role in the development of programming languages, but even here the new scope of heterogeneity poses issues not yet conquered.

The long-term success of clouds

and of new internet models in general, therefore depends not on user uptake and interest – in fact, it will always originate from there – but on whether we manage to move away from the classical Turing and von Neumann model of computing.

Future challenges therefore lie in finding ways to cope efficiently with large-scale heterogeneous infrastructures formed by a union of communities and enterprises, and in injecting an appropriate economic model to make them sustainable.