Deepesh Rastogi

2024-11-21 11:03:47

Why In-Memory Computing Is Essential for Microservices?




    Microservices continue to see a surge in adoption, and it is easy to see why. For businesses, they bring the promise of faster time-to-market, faster innovation and greater scale, all of which are critical when dealing with the many innovative and disruptive forces in today’s competitive climate. For developers and DevOps teams, microservices offer greater independence and the opportunity to adopt new programming technologies and languages more quickly. For architects, they offer the ability to deliver more dynamic architectures that fit better with other trends such as cloud native, the rise of containers and Kubernetes, serverless and the overall advent of more sophisticated compute-on-demand matched to the microservices pattern.

    Collectively, these technologies are an ideal fit for modern cloud deployment models as well as for adopting disruptive technologies such as machine learning, edge computing and the internet of things.

    Architecture Best Practice Priorities

    With the rise of cloud native, microservices and serverless architectures, the best practices outlined below help guide how these systems are set up and highlight how working with them differs from dealing with traditional monolithic applications:

    • Achieve shared state and business context across services
    • Manage complexity and scale in a world where development, DevOps and production are operating independently to support greater agility
    • Achieve resilience and zero downtime, while providing consistent performance at scale

    This article focuses on how to achieve the above by examining the role of in-memory computing and how it solves these challenges. This is especially relevant now that in-memory computing has moved beyond simple read and write acceleration to become an abstraction layer that sits between the growing complexity of microservices technologies and the diversification of cloud data stores and databases.

    Microservices and stateless architectures are designed to ensure high levels of isolation of code dependencies, enabling developers and teams to deploy code independently as well as scale different application functions independently. This has tremendous value for businesses, but it comes at a price: greater complexity makes achieving shared state much more difficult and results in more complex disaster recovery architectures.

    Achieving Shared State

    Distributed in-memory data stores, when combined with stateful event stream processing, provide compelling tools to solve these challenges. In-memory data stores, or data grids, deliver shared state that is scalable, cloud native and easily integrated into modern architectures.

    How is this breakthrough achieved? By providing a common, distributed memory layer shared across individual services, you bring the data closer to compute and keep it in memory. Less data moving over the wire reduces latency. Adding data-aware capabilities amplifies the in-memory benefit through data locality, so that computations, queries, aggregations and other processing can all occur colocated with the data. These platforms innovate further with smart client APIs and techniques such as near caching to further optimize performance.
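
    As a rough sketch of this pattern, the snippet below shows how any microservice connecting to the same cluster can read and write shared state through a distributed map with a near cache. It assumes Hazelcast's Java client API (4.x-style package names); the map name and cart payload are purely illustrative, and other data grids follow the same pattern.

    import com.hazelcast.client.HazelcastClient;
    import com.hazelcast.client.config.ClientConfig;
    import com.hazelcast.config.NearCacheConfig;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;

    public class SharedCartState {
        public static void main(String[] args) {
            // Near cache keeps hot entries in the client's own memory,
            // so repeated reads avoid a network hop entirely.
            ClientConfig config = new ClientConfig();
            config.addNearCacheConfig(new NearCacheConfig("shopping-carts"));

            HazelcastInstance client = HazelcastClient.newHazelcastClient(config);

            // Every microservice that connects to the same cluster sees the same map,
            // which is how shared state is achieved without a shared database.
            IMap<String, String> carts = client.getMap("shopping-carts");
            carts.put("customer-42", "{\"items\":[\"sku-1001\",\"sku-2002\"]}");
            System.out.println(carts.get("customer-42")); // subsequent reads are served from the near cache

            client.shutdown();
        }
    }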

    Managing Complexity

    These platforms can also help manage complexity by abstracting the systems of record away from microservice clients. This is achieved by supporting flexible, configurable write-through and write-behind processing, so that changes are written out to the systems of record either synchronously or asynchronously. Often these systems of record are a mix of SQL and NoSQL databases, and may even be deployed across a mixture of cloud-based and on-premises environments. Abstracting this complexity into a common data layer increases the agility and independence of microservice development teams, thus accelerating time-to-market.
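
    As a hedged illustration of this abstraction, the class below follows the shape of a Hazelcast-style MapStore: the data grid invokes it to load entries from, and persist entries to, the system of record, so microservice code only ever touches the in-memory map. The JDBC URL, table and column names are assumptions for illustration only.

    import com.hazelcast.map.MapStore;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;

    // Persistence adapter: the data grid calls these hooks, so microservices
    // never couple themselves directly to the underlying database.
    public class OrderMapStore implements MapStore<String, String> {

        private Connection connect() throws Exception {
            // Illustrative connection string; the system of record could be any SQL or NoSQL store.
            return DriverManager.getConnection("jdbc:postgresql://orders-db/orders", "app", "secret");
        }

        @Override
        public void store(String key, String value) {
            try (Connection c = connect();
                 PreparedStatement ps = c.prepareStatement(
                     "INSERT INTO orders (id, payload) VALUES (?, ?) " +
                     "ON CONFLICT (id) DO UPDATE SET payload = EXCLUDED.payload")) {
                ps.setString(1, key);
                ps.setString(2, value);
                ps.executeUpdate();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        @Override
        public void storeAll(Map<String, String> entries) {
            entries.forEach(this::store);
        }

        @Override
        public void delete(String key) {
            // DELETE FROM orders WHERE id = ? (omitted for brevity)
        }

        @Override
        public void deleteAll(Collection<String> keys) {
            keys.forEach(this::delete);
        }

        @Override
        public String load(String key) {
            try (Connection c = connect();
                 PreparedStatement ps = c.prepareStatement("SELECT payload FROM orders WHERE id = ?")) {
                ps.setString(1, key);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        @Override
        public Map<String, String> loadAll(Collection<String> keys) {
            Map<String, String> result = new HashMap<>();
            keys.forEach(k -> result.put(k, load(k)));
            return result;
        }

        @Override
        public Iterable<String> loadAllKeys() {
            return null; // returning null skips eager pre-loading of the map
        }
    }

    In Hazelcast, for example, wiring such a class into a map's store configuration with a non-zero write delay turns synchronous write-through into asynchronous write-behind, which keeps slow systems of record off the request path.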

    Stateful stream processing enables further isolation between microservices layers by providing continuous ingest into and out of the in-memory data layer. It also addresses the challenges of transaction processing in a world of microservices and serverless, where the individual building blocks do not by themselves deliver longer-running, multistep, context-aware transaction processing.
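
    One deliberately simplified way to picture that continuous ingest is sketched below: a plain Kafka consumer reads domain events and keeps the shared in-memory map current. The topic, map name and broker address are assumptions for illustration; a full stream-processing engine would add windowing, fault-tolerant state and delivery guarantees on top of this basic loop.

    import com.hazelcast.client.HazelcastClient;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    public class CartEventIngest {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka:9092"); // illustrative broker address
            props.put("group.id", "cart-ingest");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            HazelcastInstance grid = HazelcastClient.newHazelcastClient();
            IMap<String, String> carts = grid.getMap("shopping-carts");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("cart-events"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        // The event key is the customer id; the value is the latest cart snapshot.
                        carts.put(record.key(), record.value());
                    }
                }
            }
        }
    }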

    Achieving Resilience

    The more advanced in-memory platforms support high-performance, multiregion global architectures, enabling zero-downtime business operations on top of a shared memory layer. This also simplifies scaling these services up to more fully deliver on the promise of cloud native and serverless. These platforms also provide features such as automated disaster recovery, zero-downtime code deployments (blue-green deployments) and rolling product upgrades, as well as tools to integrate seamlessly with modern cloud DevOps automation and with new AIOps tools that monitor these architectures and deliver auto-scaling and autonomous troubleshooting.

    For a concrete example of how these could be employed, imagine the many microservices in an online shopping application. These include separate capabilities that power browsing for products, adding and removing items from a shopping cart, and so on. Moreover, each of these microservices can be largely independent of the others. But some actions, such as checkout, fulfillment and shipping, may require multistep orchestration and some rollback behavior. Stateful stream processing is very effective at addressing these needs, while the user-facing interactions remain driven by independent microservices. Another aspect to consider is the need for continuous aggregation and calculation to track and maintain “available to promise” information, a process that again requires continuous stateful analytics.
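
    To make the “available to promise” idea concrete, the simplified sketch below keeps a running figure per SKU and updates it as reservation, cancellation and restock events arrive. The event types and fields are hypothetical; in a real deployment this state would live in the data grid and be maintained by a stateful stream-processing job rather than a local in-process map.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Continuously maintained "available to promise" = on-hand stock minus open reservations.
    public class AvailableToPromise {

        public record InventoryEvent(String sku, String type, int quantity) { }

        private final Map<String, Integer> availableBySku = new ConcurrentHashMap<>();

        // Called once per incoming event; in production this would be a stage in a streaming pipeline.
        public void onEvent(InventoryEvent event) {
            int delta = switch (event.type()) {
                case "RESTOCK", "RESERVATION_CANCELLED" -> event.quantity();
                case "RESERVED"                          -> -event.quantity();
                default                                  -> 0; // ignore unrelated event types
            };
            availableBySku.merge(event.sku(), delta, Integer::sum);
        }

        // The browsing and checkout microservices query this figure before promising delivery.
        public int available(String sku) {
            return availableBySku.getOrDefault(sku, 0);
        }

        public static void main(String[] args) {
            AvailableToPromise atp = new AvailableToPromise();
            atp.onEvent(new InventoryEvent("sku-1001", "RESTOCK", 100));
            atp.onEvent(new InventoryEvent("sku-1001", "RESERVED", 3));
            System.out.println(atp.available("sku-1001")); // prints 97
        }
    }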

    An added benefit of stateful stream processing is that it can reliably integrate real-time machine learning into the aforementioned analytics pipelines while maintaining high levels of agility and allowing A/B testing and other machine learning operations (MLOps) techniques. This enables smarter, more disruptive applications: low-latency fraud and anomaly detection, real-time personalization, and autonomous customer service and business processes.
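
    A/B testing within such a pipeline can be as simple as deterministically routing each event to one of two model variants and recording which variant produced the score. The sketch below is hypothetical: the FraudModel interface and variant names are assumptions, standing in for whatever real-time inference the pipeline actually calls.

    import java.util.Map;

    // Deterministic A/B split inside the scoring stage of a streaming pipeline.
    public class AbTestScorer {

        // Hypothetical model abstraction; could wrap any real-time inference call.
        public interface FraudModel {
            double score(Map<String, Object> features);
        }

        public record ScoredEvent(String customerId, String variant, double score) { }

        private final FraudModel variantA;
        private final FraudModel variantB;

        public AbTestScorer(FraudModel variantA, FraudModel variantB) {
            this.variantA = variantA;
            this.variantB = variantB;
        }

        // Hashing the customer id keeps each customer pinned to the same variant,
        // which keeps the A/B comparison clean across sessions.
        public ScoredEvent score(String customerId, Map<String, Object> features) {
            boolean useA = (customerId.hashCode() & 1) == 0;
            FraudModel model = useA ? variantA : variantB;
            return new ScoredEvent(customerId, useA ? "A" : "B", model.score(features));
        }
    }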

    The Ever-Present Need for Agility

    Never has it been more critical to deliver greater agility to meet the needs of businesses to provide always-on, touchless and connected experiences that retain customers and protect employees. Additionally, the rollout of 5G will greatly accelerate the shift to touchless interactions and the adoption of technologies such as augmented reality, edge computing and the Internet of Things. The key to success in this new normal will be for businesses to combine microservices, cloud-native and serverless architectures with in-memory computing, data grids and stateful stream processing.

     

    By John DesJardins on thenewstack.io