A Technical Deep Dive into Istio Service Mesh


This is a technologists’ deep dive into Istio service mesh. If you’re new to the technology, check out our executive introduction, Why Istio Service Mesh Could Relieve Your Cloud Modernisation Headaches.

What are the Benefits of Service Mesh?

A service mesh is a cloud-native infrastructure layer used to manage service-to-service communication. It is responsible for the reliable delivery of requests through the complex topology of distributed services that comprise a modern cloud-native application. It provides behavioural insights and operational control over the application service ecosystem, offering a complete solution that satisfies the diverse requirements of a microservice architecture.

In practice, a service mesh is typically implemented as an array of lightweight network proxies deployed alongside each microservice, outside the application code and without the application needing to be aware of them. This is referred to as the sidecar pattern.
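To make the interception idea concrete, here is a minimal in-process sketch of the pattern. A real sidecar is a separate network proxy process (in Istio, Envoy), not a wrapper class; the service name and handler below are invented for illustration.

```python
class Sidecar:
    """Hypothetical in-process stand-in for a sidecar proxy: it wraps a
    service handler and adds retries and request counting while the
    handler's code stays unmodified and unaware of the wrapper."""

    def __init__(self, handler, max_retries=2):
        self.handler = handler
        self.max_retries = max_retries
        self.requests_seen = 0        # telemetry the application never wrote

    def __call__(self, request):
        self.requests_seen += 1
        for attempt in range(self.max_retries + 1):
            try:
                return self.handler(request)   # application code, unmodified
            except ConnectionError:
                if attempt == self.max_retries:
                    raise              # retry budget exhausted

# The "application" knows nothing about retries or metrics.
def billing_service(request):
    return {"status": "ok", "order": request["order"]}

proxy = Sidecar(billing_service)
print(proxy({"order": 42}))   # {'status': 'ok', 'order': 42}
```

The same separation holds at network level: callers talk to the proxy, the proxy talks to the service, and cross-cutting behaviour lives entirely in the proxy.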

Monolithic to Modularisation to Microservices

Traditionally, applications were built as a single bundle supporting many different functions within one codebase. Over the years, systems operators routinely experienced issues with maintenance, scalability, and extensibility. To resolve the maintenance problems, experts started breaking code down into functional modules that implemented logically related functionality as separate, but tightly integrated, units of software. This soon created bottlenecks in feature development because of tight dependencies between modules, and the tight integration restricted the flexibility to swap one module for another, impairing innovation.

As enterprise needs evolved, mature organisations started transforming traditional applications into more modern architectures based on microservices. Such services conform to the Twelve-Factor App methodology, which enhances the Software-as-a-Service model by enabling services to be deployed independently across multiple remote servers rather than via the traditional single-server deployment model.

Network of Services

Mature organisations tried to achieve all the characteristics of microservices to obtain the full benefits of the architecture. For instance, polyglot development allows multiple languages and the selection of a tech stack per service, while decentralised governance allows independent design, build, release, deployment, and data ownership.

In a microservice architecture, all microservices stay connected through a predefined standard set of interfaces that communicate back and forth. As Figure 1 shows, this essentially forms a large mesh of interconnected services.

Figure 1. Microservices as a network of proxies

Microservices Own Their Data

With decentralised governance, microservice architectures enable teams to run autonomously while adhering to defined contracts. They maintain backwards compatibility by following proper versioning techniques and controlling all aspects of the software development lifecycle (SDLC) specific to a single domain including build, test, release, deploy, and most importantly, data.

Figure 2. Microservices owning their data

Maintaining each team’s information autonomy is crucial for continuous learning and for adapting to changing business environments. Autonomy lets each team keep growing a culture of expertise and experience that might not otherwise be feasible.

Multiple Points of Entry

Other characteristics, like “products not projects” and componentisation via services, allow any service to be used standalone or as part of a larger orchestration of business functionality. Differentiated inputs, outputs, and configuration enable polymorphic behaviour across multiple entry points.

Figure 3. Multiple entry points to serve different behaviour

Isolated Pipelines for Each Service

Decentralised governance and automation enable encapsulated, isolated, end-to-end pipelines for build, release, and deployment of those services.

Figure 4. Microservices have their own isolated pipelines

Microservices Communication is Distributed Computing

Decentralisation of business functionality, control, data, and processes in a microservice architecture is a classic example of distributed computing. Overall functionality is achieved via isolated, self-contained features in an interconnected mesh of services, each with its own resources such as compute, memory, disk, input/output, and networking.

Distributed computing is more complex and challenging to implement than traditional computing and must be well thought out; the challenges grow with scale. We must be pragmatic when designing distributed application architectures, considering and addressing the challenges they introduce. The same applies to microservices patterns, as Figure 5 illustrates.

Figure 5. Communication among microservices is similar to distributed computing

Fallacies of Distributed Computing

Many fallacies can threaten the smooth functioning of microservices. It’s important to understand and accept this, designing solutions around those fallacies rather than ignoring them. To summarise, the eight classic fallacies of distributed computing are:

  • The network is reliable
  • Latency is zero
  • Bandwidth is infinite
  • The network is secure
  • Topology does not change
  • There is one administrator
  • Transport cost is zero
  • The network is homogeneous
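A typical defence against the first two fallacies is retrying with a timeout budget and exponential backoff. The sketch below is illustrative only; the flaky lookup function and its return value are invented.

```python
import time

def call_with_retry(operation, attempts=3, base_delay=0.01):
    """Retry a flaky operation with exponential backoff: a defence against
    the 'the network is reliable' and 'latency is zero' fallacies."""
    for attempt in range(attempts):
        try:
            return operation()
        except TimeoutError:
            if attempt == attempts - 1:
                raise                  # budget exhausted: surface the failure
            time.sleep(base_delay * 2 ** attempt)  # back off before retrying

# A flaky "remote call" that fails twice, then succeeds.
calls = {"n": 0}
def flaky_lookup():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("upstream timed out")
    return "inventory: 7 units"

print(call_with_retry(flaky_lookup))  # "inventory: 7 units" on the third attempt
```

Note that blind retries can amplify load on a struggling upstream, which is why retries are usually paired with circuit breaking, discussed below.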

Unaddressed or incorrectly addressed fallacies can cause cascading failures across the mesh of microservices, as shown in Figure 6. These threaten cloud success and slow down operations.

Figure 6. A single failure in a mesh of microservices

In an enterprise, dependency and failure in one microservice can cause cascading failures in all dependent microservices, affecting business flow and ultimately, the user experience.

Each microservice’s design and implementation must elegantly resolve all the possible failures caused by such distributed computing fallacies.

Figure 7. Single failure in a microservice causing cascading failure in others

The failure depicted in Figure 7 is one potential scenario, and there may be multiple reasons for its occurrence. Without a common solution, each team must develop its own custom code or components for each service, implementing advanced concepts such as rate limiting, retries, connection timeouts, and circuit breaking to handle the failures these fallacies cause. Implementing and testing all of these can be very costly, consuming significant time and effort.
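To illustrate what each team would otherwise have to build, here is a minimal circuit-breaker sketch: after a threshold of consecutive failures the circuit opens and calls fail fast, protecting both the caller and the struggling upstream. Names and thresholds are invented for illustration.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker sketch. After `threshold` consecutive
    failures the circuit opens and calls fail fast; after `reset_after`
    seconds one trial call is allowed through again (half-open state)."""

    def __init__(self, threshold=3, reset_after=30.0):
        self.threshold = threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, operation):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None          # half-open: allow a trial call
        try:
            result = operation()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0                  # any success closes the circuit
        return result

def unreachable_upstream():
    raise ConnectionError("connection refused")

breaker = CircuitBreaker(threshold=2, reset_after=60.0)
for _ in range(2):                 # two consecutive failures trip the breaker
    try:
        breaker.call(unreachable_upstream)
    except ConnectionError:
        pass
# Further calls now fail fast with "circuit open" until reset_after elapses.
```

Multiplied across every service, language, and team, hand-rolling logic like this is exactly the cost a service mesh is designed to remove.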

The Real Cost of Microservices

The true cost of microservices lies in managing infrastructure and operational complexity. In a mature cloud-native deployment, developing enterprise applications that follow microservices patterns is relatively straightforward at the start, but challenges and costs escalate as complexity increases. The cost of operating the mesh infrastructure day to day, while meeting critical business SLAs, rises in line with that complexity.

Important aspects of infrastructure and operations management include:

  • Deployment
  • Delivery
  • API Management
  • Versioning
  • Contracts
  • Scaling/auto-scaling
  • Service discovery
  • Load balancing
  • Routing/adaptive routing
  • Health checking
  • Configuration
  • Circuit breaking
  • Latency tracing
  • Service causal tracing
  • Distributed logging
  • Metrics exposure and collection

Handling Cross-cutting Concerns

Many of these infrastructure and operational concerns must be handled by the application code itself, through dependency libraries. Consider a very mature organisation with multiple autonomous feature teams, each wanting to use its own choice of programming languages, frameworks, and tools to implement cross-cutting concerns. To support those non-functional requirements, more than half of the resulting service package can consist of non-application library code.

The following diagrams show how cross-cutting concerns can be implemented by embedding them (the traditional approach, encapsulating the solution within the application code) or by externalising them (a modern approach that avoids code bloat).

Figure 8. Microservices need a separate library for each cross-cutting concern for each different programming language
Figure 9. A microservice embedding cross-cutting concerns
Figure 10. Service mesh provides a single, unified declarative solution to address cross-cutting concerns, independent of the programming languages used
Figure 11. View of a microservice in Istio, able to address cross-cutting concerns declaratively and externally (by Envoy proxy/sidecar container)

If each autonomous team chooses different programming languages, the number of libraries multiplies by the number of concerns to be addressed. As Figures 8 and 9 show, all these libraries bloat the microservice with additional code that could have been avoided.

Figure 11 shows those cross-cutting concerns managed externally by the proxy or by a common framework or tool. Although some cross-cutting concerns are already managed by cloud-native platforms like Kubernetes and OpenShift, they are tightly integrated into the platform itself. So there is still a gap to be filled by a framework or tool that is common, declarative, and easy to extend and customise. The answer to such requirements is a service mesh, as explained in this CloudCover article outlining how service mesh gives organisations a competitive edge.
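As a sketch of the declarative style, an Istio DestinationRule can express circuit breaking as configuration enforced by the Envoy sidecar, with no application code at all. The service name and thresholds below are invented for illustration, not taken from this article:

```yaml
# Hypothetical example: circuit breaking declared as configuration,
# enforced by the Envoy sidecar rather than by application code.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: billing-circuit-breaker        # invented name
spec:
  host: billing.prod.svc.cluster.local # invented service host
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 100
        maxRequestsPerConnection: 10
    outlierDetection:                  # ejects unhealthy endpoints
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
```

Retries, timeouts, traffic shifting, and mutual TLS can be declared in the same way, giving every team the same behaviour regardless of the language a service is written in.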

Transforming monolithic applications into modern microservices-based applications enhanced with a service mesh increases reliability, reduces costs, and saves time and effort.

You can read more about the benefits and technical capabilities of Istio service mesh in the following articles:

Or contact us if you’d like guidance and support with your own service mesh implementation.

Kamesh is a seasoned technology professional focused on solving the real-world problems of enterprises using modern, innovative architecture, with a strong emphasis on zero-trust security. He has led some of the largest digital transformation projects end to end, from solution to delivery, and maintains a high focus on customer orientation, security, innovation, and quality.