
Optimize Amazon EMR runtime for Apache Spark with EMR S3A


With the Amazon EMR 7.10 runtime, Amazon EMR has launched EMR S3A, an improved implementation of the open source S3A file system connector. This enhanced connector is now automatically set as the default S3 file system connector for Amazon EMR deployment options, including Amazon EMR on EC2, Amazon EMR Serverless, Amazon EMR on Amazon EKS, and Amazon EMR on AWS Outposts, while maintaining full API compatibility with open source Apache Spark.
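Because the connector sits behind the Hadoop file system API, existing Spark code that reads from or writes to S3 paths does not need to change. The following minimal sketch (the bucket and prefix are placeholders) runs identically whether EMR S3A, EMRFS, or open source S3A serves the path:

-- Count rows in a Parquet dataset on S3; the statement is the same
-- regardless of which S3 connector backs the path.
SELECT COUNT(*) FROM parquet.`s3://amzn-s3-demo-bucket/tpcds/store_sales/`;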

In the Amazon EMR 7.10 runtime for Apache Spark, the EMR S3A connector shows performance comparable to EMRFS for read workloads, as demonstrated by the TPC-DS query benchmark. The connector's most significant performance gains are evident in write operations, with a 7% improvement in static partition overwrites and a 215% improvement for dynamic partition overwrites when compared to EMRFS. In this post, we showcase the improved read and write performance advantages of using the Amazon EMR 7.10.0 runtime for Apache Spark with EMR S3A compared to EMRFS and the open source S3A file system connector.

Read workload performance comparison

To evaluate the read performance, we used a test environment based on Amazon EMR runtime version 7.10.0 running Spark 3.5.5 and Hadoop 3.4.1. Our testing infrastructure featured an Amazon Elastic Compute Cloud (Amazon EC2) cluster consisting of nine r5d.4xlarge instances. The primary node has 16 vCPUs and 128 GB of memory, and the eight core nodes have a total of 128 vCPUs and 1,024 GB of memory.

The performance evaluation was conducted using a comprehensive testing methodology designed to produce accurate and meaningful results. For the source data, we chose the 3 TB scale factor, which contains 17.7 billion records, approximately 924 GB of compressed data partitioned in Parquet file format. The setup instructions and technical details can be found in the GitHub repository. We used Spark's in-memory data catalog to store metadata for TPC-DS databases and tables.
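As an illustration, registering the partitioned Parquet source data with Spark's in-memory catalog can look like the following sketch for one TPC-DS table; the database name, bucket, and path are placeholders:

CREATE DATABASE IF NOT EXISTS tpcds_3tb;
CREATE TABLE tpcds_3tb.store_sales
USING PARQUET
LOCATION 's3://amzn-s3-demo-bucket/tpcds-3tb/store_sales/';
-- For partitioned data, recover the partition metadata after creation
MSCK REPAIR TABLE tpcds_3tb.store_sales;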

To provide a fair and accurate comparison between EMR S3A, EMRFS, and the open source S3A implementations, we followed a three-phase testing approach:

  • Phase 1: Baseline performance:
    • Established a baseline using the default Amazon EMR configuration with EMR's S3A connector
    • Created a reference point for subsequent comparisons
  • Phase 2: EMRFS analysis:
    • Set the default file system to EMRFS
    • Preserved all other configuration settings
  • Phase 3: Open source S3A testing:
    • Modified only the hadoop-aws.jar file by replacing it with the open source Hadoop S3A 3.4.1 version
    • Maintained identical configurations across all other components

This controlled testing environment was essential for our evaluation for the following reasons:

  • We could isolate the performance impact specifically to the S3A connector implementation
  • It removed potential variables that could skew the results
  • It provided accurate measurements of the performance differences between Amazon's S3A implementation and the open source alternative

Test execution and results

Throughout the testing process, we maintained consistency in test conditions and configurations, making sure that any observed performance differences could be directly attributed to differences in the S3A connector implementations. A total of 104 SparkSQL queries were run sequentially in 10 iterations, and the average of each query's runtime across those 10 iterations was used for comparison. The average total runtime across the 10 iterations on the Amazon EMR 7.10 runtime for Apache Spark with EMR S3A was 1116.87 seconds, which is 1.08 times faster than open source S3A and comparable with EMRFS. The following figure illustrates the total runtime in seconds.

The following table summarizes the metrics.

Metric                                   OSS S3A    EMRFS      EMR S3A
Average runtime in seconds               1208.26    1129.64    1116.87
Geometric mean over queries in seconds   7.63       7.09       6.99
Total cost*                              $6.53      $6.40      $6.15

*Detailed cost estimates are discussed later in this post.

The following chart demonstrates the per-query performance improvement of EMR S3A relative to open source S3A on the Amazon EMR 7.10 runtime for Apache Spark. The extent of the speedup varies from one query to another, reaching up to 1.51 times faster for q3, with Amazon EMR S3A outperforming open source S3A. The horizontal axis arranges the TPC-DS 3 TB benchmark queries in descending order based on the performance improvement seen with Amazon EMR, and the vertical axis depicts the magnitude of this speedup as a ratio.

Read cost comparison

Our benchmark outputs the total runtime and geometric mean figures to measure Spark runtime performance. The cost metric can provide us with additional insight. Cost estimates are computed using the following formulas (a worked example follows the list). They account for Amazon EC2, Amazon Elastic Block Store (Amazon EBS), and Amazon EMR costs, but don't include Amazon Simple Storage Service (Amazon S3) GET and PUT costs.

  • Amazon EC2 cost (includes SSD cost) = number of instances * r5d.4xlarge hourly rate * job runtime in hours
    • r5d.4xlarge hourly rate = $1.152 per hour
  • Root Amazon EBS cost = number of instances * Amazon EBS per GB-hour rate * root EBS volume size * job runtime in hours
  • Amazon EMR cost = number of instances * r5d.4xlarge Amazon EMR price * job runtime in hours
    • r5d.4xlarge Amazon EMR price = $0.27 per hour
  • Total cost = Amazon EC2 cost + root Amazon EBS cost + Amazon EMR cost
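As a quick sanity check against the table that follows, here is the EMR S3A column worked through these formulas (nine instances, a 0.48-hour runtime, and no additional EBS volumes):

-- Worked cost example for the EMR S3A run
SELECT 9 * 1.152 * 0.48          AS ec2_cost_usd,   -- ~$4.98
       9 * 0.27 * 0.48           AS emr_cost_usd,   -- ~$1.17
       9 * (1.152 + 0.27) * 0.48 AS total_cost_usd; -- ~$6.15 (sum of the rounded components)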

The following table summarizes these costs.

Metric                    EMRFS      EMR S3A     OSS S3A
Runtime in hours          0.5        0.48        0.51
Number of EC2 instances   9          9           9
Amazon EBS size           0 GB       0 GB        0 GB
Amazon EC2 cost           $5.18      $4.98       $5.29
Amazon EBS cost           $0.00      $0.00       $0.00
Amazon EMR cost           $1.22      $1.17       $1.24
Total cost                $6.40      $6.15       $6.53
Cost savings              Baseline   EMR S3A is 1.04 times better than EMRFS   EMR S3A is 1.06 times better than OSS S3A

Write workload performance comparison

We conducted benchmark tests to assess the write performance of the Amazon EMR 7.10 runtime for Apache Spark.

Static table/partition overwrite

We evaluated the static table/partition overwrite write performance of the different file systems by executing the following INSERT OVERWRITE Spark SQL query. The SELECT * FROM range(...) clause generated data at execution time. This produced approximately 15 GB of data across exactly 100 Parquet files in Amazon S3.

SET rows=4e9; -- 4 billion
SET partitions=100;
INSERT OVERWRITE DIRECTORY 's3://${bucket}/perf-test/${trial_id}'
USING PARQUET SELECT * FROM range(0, ${rows}, 1, ${partitions});
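The ${bucket} and ${trial_id} placeholders are resolved through Spark SQL variable substitution. A minimal sketch of supplying them before a trial (the values below are placeholders):

SET bucket=amzn-s3-demo-bucket;  -- placeholder bucket name
SET trial_id=trial-3f2b9c;       -- any unique value; SELECT uuid(); can generate one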

The test environment was configured as follows:

  • EMR cluster with the emr-7.10.0 release label
  • Single m5d.2xlarge instance (primary group)
  • Eight m5d.2xlarge instances (core group)
  • S3 bucket in the same AWS Region as the EMR cluster
  • The trial_id property used a UUID generator to avoid conflicts between test runs
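After each trial, the output can be sanity-checked, for example by counting the distinct Parquet files behind the written directory. This is a rough sketch using Spark's input_file_name() function and the same substitution variables as the query above:

-- Expect 100 files and 4 billion rows for a successful trial
SELECT COUNT(DISTINCT input_file_name()) AS parquet_files,
       COUNT(*)                          AS row_count
FROM parquet.`s3://${bucket}/perf-test/${trial_id}`;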

Results

After running 10 trials for each file system, we captured and summarized the query runtimes in the following chart. While EMR S3A averaged only 26.4 seconds, EMRFS and open source S3A averaged 28.4 seconds and 31.4 seconds, a 1.07 times and 1.19 times improvement, respectively.

Dynamic partition overwrite

We also evaluated the write performance by executing the following INSERT OVERWRITE dynamic partition Spark SQL query, which joins the partitioned Parquet data of the TPC-DS 3 TB web_sales and date_dim tables. The query inserts approximately 2,100 partitions, where each partition contains one Parquet file, with a combined size of approximately 31.2 GB in Amazon S3.

SET spark.sql.sources.partitionOverwriteMode=DYNAMIC;
INSERT OVERWRITE TABLE  PARTITION(wsdt_year, wsdt_month, wsdt_day)
SELECT ws_order_number, ws_quantity, ws_list_price, ws_sales_price,
       ws_net_paid_inc_ship_tax, ws_net_profit,
       dt.d_year AS wsdt_year, dt.d_moy AS wsdt_month, dt.d_dom AS wsdt_day
FROM web_sales, date_dim dt
WHERE ws_sold_date_sk = d_date_sk;
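The target table name is omitted in the query above. For reference, a hypothetical partitioned table that is compatible with the SELECT list could be defined as follows; the table name, location, and column types are assumptions based on the TPC-DS web_sales schema, not taken from the benchmark setup:

-- Hypothetical target table for the dynamic partition overwrite test
CREATE TABLE web_sales_by_sold_date (
  ws_order_number          BIGINT,
  ws_quantity              INT,
  ws_list_price            DECIMAL(7,2),
  ws_sales_price           DECIMAL(7,2),
  ws_net_paid_inc_ship_tax DECIMAL(7,2),
  ws_net_profit            DECIMAL(7,2)
)
USING PARQUET
PARTITIONED BY (wsdt_year INT, wsdt_month INT, wsdt_day INT)
LOCATION 's3://amzn-s3-demo-bucket/web_sales_by_sold_date/';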

The test environment was configured as follows:

  • EMR cluster with the emr-7.10.0 release label
  • Single r5d.4xlarge instance (primary group)
  • Five r5d.4xlarge instances (core group)
  • Approximately 2,100 partitions with one Parquet file each
  • Combined size of approximately 31.2 GB in Amazon S3

Results

After running 10 trials for each file system, we captured and summarized the query runtimes in the following chart. While EMR S3A averaged only 90.9 seconds, EMRFS and open source S3A averaged 286.4 seconds and 1,438.5 seconds, a 3.15 times and 15.82 times improvement, respectively.

Summary

Amazon EMR continually enhances its Apache Spark runtime and S3A connector, delivering steady performance improvements that help big data customers run analytics workloads more cost-effectively. Beyond the performance gains, the strategic shift to S3A brings critical advantages, including stronger standardization, improved cross-platform portability, and robust community-driven support, all while matching or surpassing the performance benchmarks established by the previous EMRFS implementation.

We recommend that you stay up to date with the latest Amazon EMR release to take advantage of the latest performance and feature benefits. Subscribe to the AWS Big Data Blog's RSS feed to learn more about the Amazon EMR runtime for Apache Spark, configuration best practices, and tuning advice.


About the authors

Giovanni Matteo Fumarola

Giovanni is the Senior Manager for the Amazon EMR Spark and Iceberg team. He is an Apache Hadoop Committer and PMC member. He has been focusing on the big data analytics space since 2013.

Sushil Kumar Shivashankar

Sushil is the Engineering Manager for the Amazon EMR Hadoop and Flink team at Amazon Web Services. With a focus on big data analytics since 2014, he leads development, optimizations, and growth strategies for the Hadoop and Flink business in Amazon EMR.

Narayanan Venkateswaran

Narayanan is a Senior Software Development Engineer in the Amazon EMR team. He works on developing Hadoop components in Amazon EMR. He has over 20 years of work experience in the industry across several companies, including Sun Microsystems, Microsoft, Amazon, and Oracle. Narayanan also holds a PhD in databases with a focus on horizontal scalability in relational stores.

Syed Shameerur Rahman

Syed is a Software Development Engineer at Amazon EMR. He is interested in highly scalable, distributed computing. He is an active contributor to open source projects such as Apache Hive, Apache Tez, Apache ORC, and Apache Hadoop, and has contributed important features and optimizations. During his free time, he enjoys exploring new places and trying new foods.

Rajarshi Sarkar

Rajarshi is a Software Development Engineer at Amazon EMR. He works on cutting-edge features of Amazon EMR and is also involved in open source projects such as Apache Hive, Iceberg, Trino, and Hadoop. In his spare time, he likes to travel, watch movies, and hang out with friends.
