
How REA Group approaches Amazon MSK cluster capacity planning


This post was written by Eunice Aguilar and Francisco Rodera from REA Group.

Enterprises that need to share and access large amounts of data across multiple domains and services need to build a cloud infrastructure that scales as needs change. REA Group, a digital business that specializes in real estate property, solved this problem using Amazon Managed Streaming for Apache Kafka (Amazon MSK) and a data streaming platform called Hydro.

REA Group’s team of more than 3,000 people is guided by our purpose: to change the way the world experiences property. We help people with all aspects of their property experience, not just buying, selling, and renting, through the richest content, data and insights, valuation estimates, and home financing solutions. We deliver unparalleled value to our customers, Australia’s real estate agents, by providing access to the largest and most engaged audience of property seekers.

To achieve this, the different technical products within the company regularly need to move data across domains and services efficiently and reliably.

Within the Data Platform team, we have built a data streaming platform called Hydro to provide this capability across the whole organization. Hydro is powered by Amazon MSK and other tools with which teams can move, transform, and publish data at low latency using event-driven architectures. Such a structure is foundational at REA for building microservices and timely data processing for real-time and batch use cases like time-sensitive outbound messaging, personalization, and machine learning (ML).

In this post, we share our approach to MSK cluster capacity planning.

The problem

Hydro manages a large-scale Amazon MSK infrastructure by providing configuration abstractions, allowing users to focus on delivering value to REA without the cognitive overhead of infrastructure management. As the use of Hydro grows within REA, it’s crucial to perform capacity planning to meet user demands while maintaining optimal performance and cost-efficiency.

Hydro uses provisioned MSK clusters in development and production environments. In each environment, Hydro manages a single MSK cluster that hosts multiple tenants with differing workload requirements. Proper capacity planning makes sure the clusters can handle high traffic and provide all users with the desired level of service.

Real-time streaming is a relatively new technology at REA. Many users aren’t yet familiar with Apache Kafka, and accurately assessing their workload requirements can be difficult. As the custodians of the Hydro platform, it’s our responsibility to find a way to perform capacity planning so we can proactively assess the impact of user workloads on our clusters.

Goals

Capacity planning involves determining the appropriate size and configuration of the cluster based on current and projected workloads, as well as considering factors such as data replication, network bandwidth, and storage capacity.

Without proper capacity planning, Hydro clusters can become overwhelmed by high traffic and fail to provide users with the desired level of service. Therefore, it’s crucial for us to invest time and resources into capacity planning to make sure Hydro clusters can deliver the performance and availability that modern applications require.

The capacity planning approach we follow for Hydro covers three main areas:

  • The models used for the calculation of current and estimated future capacity needs, including the attributes used as variables in them
  • The models used to assess the approximate expected capacity required for a new Hydro workload joining the platform
  • The tooling available to operators and custodians to assess the historical and current capacity consumption of the platform and, based on that, the available headroom

The following diagram shows the interplay of capacity usage and the precalculated maximum utilization.

Although we don’t have this capability yet, the goal is to take this approach one step further in the future and predict the approximate resource depletion time, as shown in the following diagram.

To make sure our digital operations are resilient and efficient, we must maintain comprehensive observability of our current capacity usage. This detailed oversight allows us not only to understand the performance limits of our existing infrastructure, but also to identify potential bottlenecks before they impact our services and users.

By proactively setting and monitoring well-understood thresholds, we can receive timely alerts and take the necessary scaling actions. This approach makes sure our infrastructure can meet demand spikes without compromising performance, ultimately supporting a seamless user experience and maintaining the integrity of our system.

Solution overview

The MSK clusters in Hydro are configured with a PER_TOPIC_PER_BROKER level of monitoring, which provides metrics at the broker and topic levels. These metrics help us determine the attributes of the cluster usage effectively.

However, it wouldn’t be practical to display an excessive number of metrics on our monitoring dashboards because that would lead to less clarity and slower insights on the cluster. It’s more valuable to choose the most relevant metrics for capacity planning rather than displaying numerous metrics.
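If it helps to make the setup concrete, the following is a minimal AWS CDK (TypeScript) sketch of how a provisioned MSK cluster can be configured with PER_TOPIC_PER_BROKER enhanced monitoring. The cluster name, Kafka version, broker sizing, and networking are placeholder assumptions, not Hydro’s actual configuration.

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as msk from 'aws-cdk-lib/aws-msk';

export class ExampleMskStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Placeholder VPC; an existing VPC would normally be looked up instead.
    const vpc = new ec2.Vpc(this, 'MskVpc', { maxAzs: 3 });

    new msk.CfnCluster(this, 'MskCluster', {
      clusterName: 'hydro-example-cluster', // hypothetical name
      kafkaVersion: '3.6.0',
      numberOfBrokerNodes: 3,
      brokerNodeGroupInfo: {
        instanceType: 'kafka.m5.large',
        clientSubnets: vpc.privateSubnets.map((subnet) => subnet.subnetId),
        storageInfo: { ebsStorageInfo: { volumeSize: 1000 } },
      },
      // Broker- and topic-level metrics used for capacity planning.
      enhancedMonitoring: 'PER_TOPIC_PER_BROKER',
    });
  }
}
```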

Cluster usage attributes

Based on the Amazon MSK best practices guidelines, we have identified several key attributes to assess the health of the MSK cluster. These attributes include the following:

  • In/out throughput
  • CPU usage
  • Disk space usage
  • Memory usage
  • Producer and consumer latency
  • Producer and consumer throttling

For more information on right-sizing your clusters, see Best practices for right-sizing your Apache Kafka clusters to optimize performance and cost, Best practices for Standard brokers, Monitor CPU usage, Monitor disk space, and Monitor Apache Kafka memory.

The following table contains the detailed list of all the attributes we use for MSK cluster capacity planning in Hydro.

Attribute Name | Attribute Type | Units | Comments
Bytes in | Throughput | Bytes per second | Depends on the aggregate Amazon EC2 network, Amazon EBS network, and Amazon EBS storage throughput
Bytes out | Throughput | Bytes per second | Depends on the aggregate Amazon EC2 network, Amazon EBS network, and Amazon EBS storage throughput
Consumer latency | Latency | Milliseconds | High or unacceptable latency values usually indicate user experience degradation before reaching actual resource (for example, CPU and memory) depletion
CPU usage | Capacity limits | % CPU user + CPU system | Should stay under 60%
Disk space usage | Persistent storage | Bytes | Should stay under 85%
Memory usage | Capacity limits | % memory in use | Should stay under 60%
Producer latency | Latency | Milliseconds | High or unacceptable sustained latency values usually indicate user experience degradation before reaching actual capacity limits or actual resource (for example, CPU or memory) depletion
Throttling | Capacity limits | Milliseconds, bytes, or messages | High or unacceptable sustained throttling values indicate capacity limits are being reached before actual resource (for example, CPU or memory) depletion

By monitoring these attributes, we can quickly evaluate the performance of the clusters as we add more workloads to the platform. We then match these attributes to the relevant MSK metrics available.
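As a rough illustration of that mapping, the sketch below defines a few of the corresponding broker-level metrics from the AWS/Kafka CloudWatch namespace using the AWS CDK. The cluster name, broker ID, statistic, and period are assumptions made for the example, not Hydro’s exact selection.

```typescript
import { Duration } from 'aws-cdk-lib';
import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';

// Hypothetical cluster name and broker ID, used for illustration only.
const dimensionsMap = { 'Cluster Name': 'hydro-example-cluster', 'Broker ID': '1' };

const mskMetric = (metricName: string, statistic = 'Average') =>
  new cloudwatch.Metric({
    namespace: 'AWS/Kafka',
    metricName,
    dimensionsMap,
    statistic,
    period: Duration.minutes(5),
  });

// Capacity planning attributes mapped to MSK broker-level metrics.
export const capacityMetrics = {
  bytesIn: mskMetric('BytesInPerSec'),              // in throughput
  bytesOut: mskMetric('BytesOutPerSec'),            // out throughput
  cpu: mskMetric('CpuUser'),                        // combined with CpuSystem for CPU usage
  disk: mskMetric('KafkaDataLogsDiskUsed'),         // disk space usage
  memory: mskMetric('HeapMemoryAfterGC'),           // memory usage
  producerLatency: mskMetric('ProduceTotalTimeMsMean'),
  consumerLatency: mskMetric('FetchConsumerTotalTimeMsMean'),
  throttling: mskMetric('ProduceThrottleTime'),
};
```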

Cluster capacity limits

During the initial capacity planning, our MSK clusters weren’t receiving enough traffic to give us a clear idea of their capacity limits. To address this, we used the AWS performance testing framework for Apache Kafka to evaluate the theoretical performance limits. We performed performance and capacity tests on test MSK clusters that had the same cluster configurations as our development and production clusters. We obtained a more comprehensive understanding of the cluster’s performance by running these various test scenarios. The following figure shows an example of a test cluster’s performance metrics.

To perform the tests within a specific timeframe and budget, we focused on the test scenarios that could efficiently measure the cluster’s capacity. For instance, we performed tests that involved sending high-throughput traffic to the cluster and creating topics with many partitions.

After every test, we collected the metrics of the test cluster and extracted the maximum values of the key cluster usage attributes. We then consolidated the results and determined the most appropriate limits for each attribute. The following screenshot shows an example of the exported test cluster’s performance metrics.
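Separate from the exported reports, the following is a minimal sketch of how the peak value of one such attribute could be pulled from CloudWatch after a test run, assuming the AWS SDK for JavaScript v3 and a placeholder test cluster name; our actual tooling and metric selection may differ.

```typescript
import {
  CloudWatchClient,
  GetMetricStatisticsCommand,
} from '@aws-sdk/client-cloudwatch';

const cloudwatchClient = new CloudWatchClient({});

// Returns the highest observed value of an AWS/Kafka metric over a test window.
async function maxDuringTest(
  metricName: string,
  clusterName: string,
  start: Date,
  end: Date,
): Promise<number> {
  const { Datapoints = [] } = await cloudwatchClient.send(
    new GetMetricStatisticsCommand({
      Namespace: 'AWS/Kafka',
      MetricName: metricName,
      Dimensions: [{ Name: 'Cluster Name', Value: clusterName }],
      StartTime: start,
      EndTime: end,
      Period: 60,
      Statistics: ['Maximum'],
    }),
  );
  return Math.max(0, ...Datapoints.map((d) => d.Maximum ?? 0));
}

// Example: peak ingress throughput observed on a hypothetical test cluster.
// const peak = await maxDuringTest('BytesInPerSec', 'hydro-perf-test', testStart, testEnd);
```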

Capacity monitoring dashboards

As part of our platform management process, we conduct monthly operational reviews to maintain optimal performance. This involves analyzing an automated operational report that covers all the systems on the platform. During the review, we evaluate the service level objectives (SLOs) based on select service level indicators (SLIs) and assess the monitoring alerts triggered during the previous month. By doing so, we can identify any issues and take corrective actions.

To assist us in conducting the operational reviews and to give us an overview of the cluster’s usage, we developed a capacity monitoring dashboard, as shown in the following screenshot, for each environment. We built the dashboard as infrastructure as code (IaC) using the AWS Cloud Development Kit (AWS CDK). The dashboard is generated and managed automatically as part of the platform infrastructure, together with the MSK cluster.

By defining the maximum capacity limits of the MSK cluster in a configuration file, the limits are automatically loaded into the capacity dashboard as annotations in the Amazon CloudWatch graph widgets. The capacity limit annotations are clearly visible and give us a view of the cluster’s capacity headroom based on usage.
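The following AWS CDK sketch shows the general idea: capacity limits are read from a configuration file and rendered as horizontal annotations on a CloudWatch graph widget. The file name, limit values, and metric wiring (including the capacity-metrics module from the earlier sketch) are illustrative assumptions rather than Hydro’s actual configuration.

```typescript
import * as fs from 'fs';
import { Construct } from 'constructs';
import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';
import { capacityMetrics } from './capacity-metrics'; // hypothetical module from the earlier sketch

// Hypothetical config file, e.g. { "bytesInLimit": 75000000, "cpuLimitPercent": 60 }
const limits = JSON.parse(fs.readFileSync('capacity-limits.json', 'utf8'));

export function addCapacityDashboard(scope: Construct): cloudwatch.Dashboard {
  const dashboard = new cloudwatch.Dashboard(scope, 'CapacityDashboard', {
    dashboardName: 'hydro-msk-capacity', // placeholder name
  });

  dashboard.addWidgets(
    new cloudwatch.GraphWidget({
      title: 'Bytes in per second',
      left: [capacityMetrics.bytesIn],
      // Precalculated capacity limit drawn as an annotation to make headroom visible.
      leftAnnotations: [
        { value: limits.bytesInLimit, label: 'Capacity limit', color: '#d62728' },
      ],
      width: 12,
    }),
  );

  return dashboard;
}
```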

We determined the capacity limits for throughput, latency, and throttling through the performance testing. Capacity limits for the other metrics, such as CPU, disk space, and memory, are based on the Amazon MSK best practices guidelines.

During the operational reviews, we proactively assess the capacity monitoring dashboards to determine if additional capacity needs to be added to the cluster. This approach allows us to identify and address potential performance issues before they have a significant impact on user workloads. It’s a preventative measure rather than a reactive response to a performance degradation.

Preemptive CloudWatch alarms

We have implemented preemptive CloudWatch alarms in addition to the capacity monitoring dashboards. These alarms are configured to alert us before a specific capacity metric reaches its threshold, notifying us when the sustained value reaches 80% of the capacity limit. This style of monitoring enables us to take swift action instead of waiting for our monthly review cadence.
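A minimal AWS CDK sketch of such a preemptive alarm is shown below. The capacity limit value, evaluation window, and metric (again reusing the hypothetical capacity-metrics module) are assumptions for illustration only.

```typescript
import { Duration } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';
import { capacityMetrics } from './capacity-metrics'; // hypothetical module from the earlier sketch

// Hypothetical capacity limit determined through performance testing.
const bytesInLimit = 75_000_000; // bytes per second

export function addPreemptiveAlarm(scope: Construct): cloudwatch.Alarm {
  // Alert once sustained ingress throughput reaches 80% of the capacity limit.
  return new cloudwatch.Alarm(scope, 'BytesInPreemptiveAlarm', {
    metric: capacityMetrics.bytesIn.with({ period: Duration.minutes(5) }),
    threshold: bytesInLimit * 0.8,
    evaluationPeriods: 6, // roughly 30 minutes of sustained breach
    comparisonOperator:
      cloudwatch.ComparisonOperator.GREATER_THAN_OR_EQUAL_TO_THRESHOLD,
    treatMissingData: cloudwatch.TreatMissingData.NOT_BREACHING,
    alarmDescription:
      'Sustained BytesInPerSec above 80% of the tested capacity limit',
  });
}
```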

Value added by our capacity planning approach

As operators of the Hydro platform, our approach to capacity planning has given us a consistent way to assess how far we are from the theoretical capacity limits of all our clusters, regardless of their configuration. Our capacity monitoring dashboards are a key observability tool that we review regularly; they’re also useful while troubleshooting performance issues. They help us quickly tell if capacity constraints could be a potential root cause of any ongoing issues. As a result, we can use our current capacity planning approach and tooling both proactively and reactively, depending on the situation and need.

Another benefit of this approach is that we calculate the theoretical maximum utilization values that a given cluster with a specific configuration can withstand on a separate cluster, without impacting any actual users of the platform. We spin up short-lived MSK clusters through our AWS CDK based automation and perform capacity tests on them. We do this fairly often to assess the impact, if any, that changes made to the cluster’s configuration have on the identified capacity limits. In line with our current feedback loop, if these newly calculated limits differ from the previously identified ones, they are used to automatically update our capacity dashboards and alarms in CloudWatch.

Future evolution

Hydro is a platform that is constantly improving with the introduction of new features. One of these features is the ability to conveniently create Kafka client applications. To meet the growing demand, it’s important to stay ahead of capacity planning. Although the approach discussed here has served us well so far, it is by no means the final stage, and there are capabilities we need to extend and areas we need to improve on.

Multi-cluster architecture

To support critical workloads, we’re considering a multi-cluster architecture on Amazon MSK, which would also affect our capacity planning. In the future, we plan to profile workloads based on metadata, cross-check them against capacity metrics, and place them in the appropriate MSK cluster. In addition to the existing provisioned MSK clusters, we will evaluate how the Amazon MSK Serverless cluster type can complement our platform architecture.

Usage trends

We have added CloudWatch anomaly detection graphs to our capacity monitoring dashboards to track any unusual trends. However, because the CloudWatch anomaly detection algorithm only evaluates up to 2 weeks of metric data, we will reassess its usefulness as we onboard more workloads. Aside from identifying usage trends, we will explore options to implement an algorithm with predictive capabilities to detect when MSK cluster resources degrade and deplete.
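As a sketch of how such a graph can be expressed in our CDK-managed dashboards, the CloudWatch ANOMALY_DETECTION_BAND metric math expression can be plotted alongside the metric itself. The band width of 2 standard deviations and the choice of metric are assumptions for the example.

```typescript
import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';
import { capacityMetrics } from './capacity-metrics'; // hypothetical module from the earlier sketch

// Plots the metric together with its expected band so unusual trends stand out.
const anomalyWidget = new cloudwatch.GraphWidget({
  title: 'Bytes in per second (anomaly detection)',
  left: [
    new cloudwatch.MathExpression({
      // 2 standard deviations is an assumed band width; CloudWatch trains on up to 2 weeks of data.
      expression: 'ANOMALY_DETECTION_BAND(m1, 2)',
      usingMetrics: { m1: capacityMetrics.bytesIn },
      label: 'Expected band',
    }),
    capacityMetrics.bytesIn,
  ],
  width: 12,
});
```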

Conclusion

Initial capacity planning lays a solid foundation for future enhancements and provides a safe onboarding process for workloads. To achieve optimal performance of our platform, we must make sure our capacity planning strategy evolves in line with the platform’s growth. Consequently, we maintain a close collaboration with AWS to continuously develop additional features that meet our business needs and are in sync with the Amazon MSK roadmap. This makes sure we stay ahead of the curve and can deliver the best possible experience to our users.

We recommend that all Amazon MSK users not miss out on maximizing their cluster’s potential and start planning their capacity. Implementing the strategies listed in this post is a great first step and will lead to smoother operations and significant savings in the long run.


About the Authors

Eunice Aguilar is a Staff Data Engineer at REA. She has worked in software engineering across various industries over the years, most recently in property data. She’s also an advocate for women interested in transitioning into tech, as well as the well-versed, from whom she takes inspiration.

Francisco Rodera is a Staff Systems Engineer at REA. He has extensive experience building and operating large-scale distributed systems. His interests are automation, observability, and applying SRE practices to business-critical services and platforms.

Khizer Naeem is a Technical Account Manager at AWS. He specializes in Efficient Compute and has a deep passion for Linux and open-source technologies, which he leverages to help enterprise customers modernize and optimize their cloud workloads.
