Optimising IT

In this blog series we will be exploring optimising IT: how organisations can optimise their IT estate through services integration. Enterprises today need to strike a balance between IT economics, application performance and security controls to meet their business application needs. This is why today’s IT solutions drive greater interoperability requirements across multiple cloud and traditional data centre environments.

To achieve this, enterprises should optimise their IT by integrating cloud, internet-delivered and in-house solutions.

The best solution will ultimately allow applications and components to interoperate securely between public and private clouds, and allow applications to be portable across these environments.

Over time these solutions must scale to deliver their respective benefits. Applications need to be able to move easily between environments, as where an application is hosted today might not be the best place for it tomorrow. For example, when:

  • Test and Development objectives become production objectives with different service criteria.
  • Public and private cloud experiences vary based on geographical and security constraints. The network experience, for example, will vary with the physical cloud location and with the throughput available from the end user’s location and the network they connect from.

Application mobility must therefore be a governing principle across the hybrid cloud, whilst still dealing with the constraints of running and integrating legacy applications/platforms in traditional data centres.

Building the Ideal IT Services Integration Model

The key principle to follow when building a hybrid cloud is to start by choosing the right approach for application hosting and for building the underlying physical infrastructure that supports it.

This ‘application first’ hosting policy is based on economic considerations combined with the service and security constraints dictated by the business-critical applications. Using this model, we can see that:

  • Private cloud is ideal for predictable workloads and custom SLAs for critical business applications, for example data backup and internal databases. Resources can then be added as needed to accommodate expected growth.
  • Public cloud is better where greater elasticity is needed for unpredictable workloads, for example digital and IoT applications, where applications can be standardised to run on commoditised platforms with common SLAs.
  • A traditional data centre or co-location environment is the option when there is no cloud migration path, i.e. when legacy IT compute and storage platforms are running key business applications.
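As a sketch, the ‘application first’ placement policy above can be expressed as a simple rule set. The workload attributes and tier names here are hypothetical, purely to illustrate the decision flow:

```python
# Hypothetical placement rule for the 'application first' policy.
# Attribute names (legacy, predictable) are illustrative assumptions.

def placement(workload):
    """Return a hosting tier for a workload description."""
    if workload.get("legacy"):       # no cloud migration path
        return "traditional-dc"
    if workload.get("predictable"):  # steady demand, custom SLA
        return "private-cloud"
    return "public-cloud"            # elastic, unpredictable demand

print(placement({"predictable": True}))  # prints private-cloud
```

In practice the decision would weigh many more attributes (cost, data sovereignty, compliance), but the ordering above mirrors the policy: legacy constraints first, then workload predictability.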

To join these cloud and data centre environments together, an end-to-end architecture is formed from a service catalogue of desired features that draws upon all the IT capabilities required to host the applications. This catalogue will include all the resources in the legacy data centre, the various cloud options, the network, the security mechanisms and the digital platforms required to access the applications.
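Such a catalogue can be modelled as a simple structure that groups required features by environment. The entries below are example placeholders, not a prescribed taxonomy:

```python
# Illustrative service catalogue grouping required IT features
# by environment; all entry names are example placeholders.
service_catalogue = {
    "legacy-dc": ["compute", "storage"],
    "private-cloud": ["backup", "internal-db"],
    "public-cloud": ["digital-apps", "iot-platform"],
    "network": ["wan", "load-balancing"],
    "security": ["gateways", "vpn"],
}

# Flatten to the full feature list the architecture must provide.
all_features = [f for feats in service_catalogue.values() for f in feats]
print(len(all_features))  # prints 10
```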

Working out which devices users connect from, and which digital platforms they are using, determines the security segregation model and the resulting security zones that the cloud will need to provide. For example, if a large number of users connect via third-party platforms or internet connections, it may be better to treat all users as ‘untrusted’ to preserve security. This effectively forces all users to connect via application gateways or user VPNs instead of connecting directly to application servers.
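That zoning decision can be sketched as a single rule: if any user population arrives over a third-party network, everyone is routed via a gateway or VPN. The network labels here are assumptions for illustration:

```python
# Illustrative security-zone decision: if any users arrive over
# third-party or internet connections, treat every user as
# untrusted and force access through an application gateway or
# user VPN rather than direct connections to application servers.

def access_path(connection_types):
    untrusted = any(c in ("internet", "third-party") for c in connection_types)
    return "gateway-or-vpn" if untrusted else "direct"

print(access_path(["corporate-lan", "internet"]))  # prints gateway-or-vpn
```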

Network Throughput

The next consideration in the architecture is network throughput. Network latency typically has two dimensions:

  • The physical location for the hosted application services and the way network latency impacts remote users at their various geographical locations.
  • Latency of servers operating within and between the cloud or data centre locations. This will include traffic between the gateway and application services as well as server replication traffic between locations.
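These two contributions add up per user request, so even a small gateway-to-application hop matters once the user-to-site leg is long. A rough arithmetic sketch, with purely illustrative figures rather than measurements:

```python
# Rough end-to-end latency model (milliseconds).
# All figures are illustrative assumptions, not measurements.
user_to_site_ms = 40    # remote user to the hosting location
gateway_to_app_ms = 5   # gateway to application servers within the site
inter_site_ms = 12      # replication traffic between locations

# Latency the end user experiences on each request.
request_latency_ms = user_to_site_ms + gateway_to_app_ms
print(request_latency_ms)  # prints 45
```

Moving the hosted application closer to its users shrinks the first term; the second and third terms are governed by where the gateways and replica sites are placed.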

Predictable, secure network performance is therefore essential. WAN acceleration and application load balancing can help offset some of the performance issues over distance and deliver greater levels of resilience, but careful planning of where these devices are placed will be needed for them to work correctly.

Buying predictable bandwidth helps, but it costs more than a ‘best efforts’ internet connection. The extra cost therefore needs to be weighed against the added risk of using internet VPN wherever possible for remote offices and users; otherwise the cost of the network bandwidth might further prohibit public cloud introduction and expansion.
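The trade-off is ultimately simple arithmetic: the premium paid per site for guaranteed bandwidth is what must justify avoiding the internet VPN’s added risk. The figures below are hypothetical examples:

```python
# Hypothetical monthly connectivity cost per remote site.
dedicated_monthly = 900.0     # predictable, guaranteed bandwidth
internet_vpn_monthly = 150.0  # 'best efforts' internet plus VPN

# Premium for predictability, to weigh against the VPN's added risk.
premium = dedicated_monthly - internet_vpn_monthly
print(premium)  # prints 750.0
```

Across many remote offices that premium compounds, which is why the text notes it can prohibit public cloud expansion if not managed.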

Once the underlying physical infrastructure is formed, the procurement and in-house creation of the pertinent ‘as a service’ models can begin. At this point the various options for ongoing deployment, management and integration as an ‘overlay’ become the most challenging aspect of any hybrid cloud operation.

You can read the full Optimising IT article here or contact us for further information. Alternatively read part II of our Optimising IT blog series.