
Unlock self-serve streaming SQL with Amazon Managed Service for Apache Flink


This post is co-written with Gal Krispel from Riskified.

Riskified is an ecommerce fraud prevention and risk management platform that helps businesses optimize online transactions by distinguishing legitimate customers from fraudulent ones.

Using artificial intelligence and machine learning (AI/ML), Riskified analyzes real-time transaction data to detect and prevent fraud while maximizing transaction approval rates. The platform provides a chargeback guarantee, protecting merchants from losses caused by fraudulent transactions. Riskified’s solutions include account security, policy abuse prevention, and chargeback management software, making it a comprehensive tool for reducing risk and improving customer experience. Businesses across various industries, including retail, travel, and digital goods, use Riskified to increase revenue while minimizing fraud-related losses. Riskified’s core business of real-time fraud prevention makes low-latency streaming technologies a fundamental part of its solution.

Businesses often can’t afford to wait for batch processing to make critical decisions. With real-time data streaming technologies like Apache Flink, Apache Spark, and Apache Kafka Streams, organizations can react instantly to emerging trends, detect anomalies, and enhance customer experiences. These technologies are powerful processing engines that perform analytical operations at scale. However, unlocking the full potential of streaming data often requires complex engineering effort, limiting accessibility for analysts and business users.

Streaming pipelines are in high demand across Riskified’s engineering organization. A user-friendly interface for creating streaming pipelines is therefore a critical capability for increasing the analytical precision of fraud detection.

In this post, we present Riskified’s journey toward enabling self-service streaming SQL pipelines. We walk through the motivations behind the shift from Confluent ksqlDB to Apache Flink, the architecture Riskified built using Amazon Managed Service for Apache Flink, the technical challenges they faced, and the solutions that helped them make streaming accessible, scalable, and production-ready.

Using SQL to create streaming pipelines

Customers have a range of open source data processing technologies to choose from, such as Flink, Spark, ksqlDB, and RisingWave. Each platform offers a streaming API for data processing. SQL streaming jobs provide a powerful and intuitive way to process real-time data with minimal complexity. These pipelines use SQL, a widely known, declarative language, to perform real-time transformations, filtering, aggregations, and joins over continuous data streams.

To illustrate the power of streaming SQL in ecommerce fraud prevention, consider velocity checks, a critical fraud detection pattern. Velocity checks are a type of security measure used to detect unusual or rapid activity by monitoring the frequency and volume of specific actions within a given timeframe. These checks help identify potential fraud or abuse by analyzing repeated behaviors that deviate from normal user patterns. Common examples include detecting multiple transactions from the same IP address in a short time span, monitoring bursts of account creation attempts, or tracking the repeated use of a single payment method across different accounts.

Use case: Riskified’s velocity checks

Riskified implemented a real-time velocity check using streaming SQL to monitor purchasing behavior based on a user identifier.

In this setup, transaction data is continuously streamed through a Kafka topic. Each message contains user agent information originating from the browser, along with the raw transaction data. Streaming SQL queries aggregate the number of transactions originating from a single user identifier within short time windows.

For example, if the number of transactions from a given user identifier exceeds a certain threshold within a 10-second interval, this can signal fraudulent activity. When that threshold is breached, the system can automatically flag or block the transactions before they’re completed. The following figure and accompanying code show a simplified example of the streaming SQL query used to detect this behavior.

SELECT
  userIdentifier,
  TUMBLE_START(createdAt, INTERVAL '10' SECOND) AS windowStart,
  TUMBLE_END(createdAt, INTERVAL '10' SECOND) AS windowEnd,
  COUNT(*) AS paymentAttempts
FROM transactions
GROUP BY userIdentifier, TUMBLE(createdAt, INTERVAL '10' SECOND);

Although defining SQL queries over static datasets may seem straightforward, creating and maintaining robust streaming applications introduces unique challenges. Traditional SQL operates on bounded datasets, which are finite collections of data stored in tables. In contrast, streaming SQL is designed to process continuous, unbounded data streams while retaining familiar SQL syntax.

To address these challenges at scale and make streaming job creation accessible to engineering teams, Riskified implemented a self-serve solution based on Confluent ksqlDB, using its SQL interface and built-in Kafka integration. Engineers could define and deploy streaming pipelines using SQL, chaining ksqlDB streams from source to sink. The system supported both stateless and stateful processing directly on Kafka topics, with Avro schemas used to define the structure of streaming data.

Although ksqlDB provided a fast and approachable starting point, it eventually revealed several limitations. These included challenges with schema evolution, difficulties in managing compute resources, and the absence of an abstraction for managing pipelines as a cohesive unit. Consequently, Riskified began exploring alternative technologies that could better support its expanding streaming use cases. The following sections outline these challenges in more detail.

Evolving the stream processing architecture

In evaluating alternatives, Riskified focused on technologies that could address the specific demands of fraud detection while preserving the simplicity that made the original approach appealing. The team encountered the following challenges in maintaining the previous solution:

  • Schemas are managed in Confluent Schema Registry, and the message format is Avro with FULL compatibility mode enforced. Schemas constantly evolve according to business requirements. They’re version controlled using Git with a strict continuous integration and continuous delivery (CI/CD) pipeline. As schemas grew more complex, ksqlDB’s approach to schema evolution didn’t automatically incorporate newly added fields. This behavior required dropping streams and recreating them to add new fields instead of simply restarting the application to pick up new fields. This approach caused inconsistencies with offset management because of the stream’s tear-down.
  • ksqlDB enforces the TopicNameStrategy schema registration strategy, which creates 1:1 schema-to-topic coupling. This means the exact same schema definition must be registered multiple times, once for every topic it’s used for. Riskified’s schema registry deployment uses RecordNameStrategy for schema registration. It’s an efficient schema registry strategy that allows sharing schemas across multiple topics, storing fewer schemas, and reducing registry management overhead. Having mixed strategies in the schema registry caused errors for Kafka consumer clients attempting to decode messages, because the client implementation expected a RecordNameStrategy according to Riskified’s standard.
  • ksqlDB internally registers schema definitions in particular ways: fields are interpreted as nullable, and Avro Enum types are converted to Strings. This behavior caused deserialization errors when attempting to migrate native Kafka consumer applications to use the ksqlDB output topic. Riskified’s code base uses the Scala programming language, where optional fields in the schema are interpreted as Option. Treating every field as optional in the schema definition required heavy refactoring, treating all Enum fields as Strings, and handling the Option data type for every field that requires safe handling. This cascading effect made the migration process more involved, requiring more time and resources to achieve a smooth transition.

Managing resource contention in ksqlDB streaming workloads

ksqlDB queries are compiled into a Kafka Streams topology. The query definition determines the topology’s behavior.

Streaming query resources are shared rather than isolated, which typically leads to overallocation of cluster resources. Tasks are distributed across the nodes of a ksqlDB cluster. This architecture means processing tasks run with no resource isolation, and a particular task can impact other tasks running on the same node.

Resource contention between tasks on the same node is common in a production-intensive environment when using a clustered architecture. Operations teams often fine-tune cluster configurations to maintain acceptable performance, frequently mitigating issues by over-provisioning cluster nodes.

Challenges with ksqlDB pipelines

A ksqlDB pipeline is a sequence of individual streams and lacks a flow-level abstraction. Consider a complex pipeline where a consumer publishes to multiple topics. In ksqlDB, each topic (both input and output) must be managed as a separate stream abstraction. However, there is no high-level abstraction to represent a complete pipeline that chains these streams together. Consequently, engineering teams must manually assemble individual streams into a cohesive data flow, without built-in support for managing them as a single, complete pipeline.

This architectural approach particularly impacts operational tasks. Troubleshooting requires inspecting each stream individually, making it difficult to monitor and maintain pipelines that contain dozens of interconnected streams. When issues occur, the health of each stream needs to be checked separately, with no logical data flow component to help understand the relationships between streams or their role in the overall pipeline. The absence of a unified view of the data flow significantly increased operational complexity.

Flink as an alternative

Riskified began exploring alternatives for its streaming platform. The requirements were clear: a strong processing technology that combines a rich low-level API and a streaming SQL engine, backed by a strong open source community, and proven to perform in the most demanding production environments.

Unlike the previous solution, which supported only Kafka-to-Kafka integration, Flink offers an array of connectors for various databases and streaming platforms. The team quickly recognized that Flink had the potential to handle complex streaming use cases.

Flink offers several deployment options, including standalone clusters, native Kubernetes deployments using operators, and Hadoop YARN clusters. For enterprises seeking a fully managed option, cloud providers like AWS offer managed Flink services, such as Managed Service for Apache Flink, that help alleviate operational overhead.

Benefits of using Managed Service for Apache Flink

Riskified decided to implement a solution using Managed Service for Apache Flink. This choice provided several key advantages:

  • It offers a fast and reliable way to run Flink applications and reduces the operational overhead of managing the infrastructure independently.
  • Managed Service for Apache Flink provides true job isolation by running each streaming application in its own dedicated cluster. This means you can manage resources separately for each job and reduce the risk of heavy streaming jobs causing resource starvation for other running jobs.
  • It offers built-in monitoring using Amazon CloudWatch metrics, application state backup with managed snapshots, and automatic scaling.
  • AWS offers comprehensive documentation and practical examples to help accelerate the implementation process.

With these features, Riskified could focus on what really matters: getting closer to the business goal and starting to write applications.

Using Flink’s streaming SQL engine

Developers can use Flink to build complex and scalable streaming applications, but Riskified saw it as more than just a tool for specialists. They wanted to democratize the power of Flink into a tool for the entire organization, to solve complex business challenges involving real-time analytics requirements without needing a dedicated data expert.

To replace their previous solution, they envisioned maintaining a “build once, deploy many” application, which encapsulates the complexity of Flink programming and lets users focus on the SQL processing logic.

Kafka was kept as the input and output technology for the initial migration use case, which resembles the ksqlDB setup. They designed a single, flexible Flink application where end users can modify the input topics, SQL processing logic, and output destinations through runtime properties. Although ksqlDB primarily focuses on Kafka integration, Flink’s extensive connector ecosystem allows expanding to various data sources and destinations in future phases.

Managed Service for Apache Flink provides a flexible way to configure streaming applications without modifying their code. By using runtime properties, you can change the application’s behavior without modifying its source code. A minimal sketch of reading these properties inside the application follows.
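The snippet below is a minimal sketch of reading runtime properties with the KinesisAnalyticsRuntime helper provided by Managed Service for Apache Flink. The property group name and keys shown here are illustrative placeholders, not Riskified’s actual configuration.

import java.util.Map;
import java.util.Properties;

import com.amazonaws.services.kinesisanalytics.runtime.KinesisAnalyticsRuntime;

public class RuntimePropertiesExample {
    public static void main(String[] args) throws Exception {
        // Runtime properties are grouped per property group in the application configuration
        Map<String, Properties> applicationProperties = KinesisAnalyticsRuntime.getApplicationProperties();
        Properties pipelineProps = applicationProperties.get("PipelineProperties"); // illustrative group name

        // Hypothetical keys; the actual keys depend on how the application is configured
        String inputTopic = pipelineProps.getProperty("input.topic");
        String outputTopic = pipelineProps.getProperty("output.topic");
        String sqlQuery = pipelineProps.getProperty("sql.query");
        String inputSchemaId = pipelineProps.getProperty("input.schema.id");
    }
}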

Using Managed Service for Apache Flink for this approach consists of the following steps (a code sketch of steps 2 and 3 follows the diagram below):

  1. Apply runtime parameters for the input/output Kafka topics, a SQL query, and the input/output schema IDs (assuming you’re using Confluent Schema Registry).
  2. Use AvroSchemaConverter to convert an Avro schema into a Flink table.
  3. Apply the SQL processing logic and save the output as a view.
  4. Sink the view results into Kafka.

The following diagram illustrates this workflow.
Streaming SQL system diagram
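The following is a minimal sketch of steps 2 and 3, assuming a Confluent Schema Registry-backed Kafka source. The table name, connector option values, and the idea of passing the user’s SQL as a parameter are assumptions for illustration; connector option keys can vary across Flink versions.

import org.apache.flink.formats.avro.typeutils.AvroSchemaConverter;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableDescriptor;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.types.DataType;

public class TableFromAvroSketch {
    public static Table buildInputTable(StreamTableEnvironment tableEnv,
                                        String avroSchemaString,
                                        String inputTopic,
                                        String bootstrapServers,
                                        String schemaRegistryUrl,
                                        String userSqlQuery) {
        // Step 2: derive the Flink table schema from the Avro schema fetched from the registry
        DataType rowType = AvroSchemaConverter.convertToDataType(avroSchemaString);

        tableEnv.createTemporaryTable("InputTable",
                TableDescriptor.forConnector("kafka")
                        .schema(Schema.newBuilder().fromRowDataType(rowType).build())
                        .option("topic", inputTopic)
                        .option("properties.bootstrap.servers", bootstrapServers)
                        .option("scan.startup.mode", "latest-offset")
                        .format("avro-confluent")
                        .option("avro-confluent.url", schemaRegistryUrl)
                        .build());

        // Step 3: apply the user-supplied SQL processing logic
        return tableEnv.sqlQuery(userSqlQuery);
    }
}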

Performing Flink SQL query compilation without a Flink runtime environment

Giving end users significant control over their pipelines makes it essential to verify the user-defined SQL query before deployment. This validation prevents failed or hanging jobs that would consume unnecessary resources and incur unnecessary costs.

A key challenge was validating Flink SQL queries without deploying the full Flink runtime. After investigating Flink’s SQL implementation, Riskified discovered its dependency on Apache Calcite, a dynamic data management framework that handles SQL parsing, optimization, and query planning independently of data storage. This insight enabled using Calcite directly for query validation before job deployment.

You need to know how the data is structured to validate a Flink SQL query on a streaming source like a Kafka topic. Otherwise, unexpected errors might occur when attempting to query the streaming source. Although a known schema is enforced with relational databases, it’s not enforced for streaming sources.

Schemas guarantee a deterministic structure for the data stored in a Kafka topic when using a schema registry. A schema can be materialized into a Calcite table that defines how data is structured in the Kafka topic. This allows inferring table structures directly from schemas (in this case, the Avro format was used), enabling thorough field-level validation, including type checking and field existence, all before job deployment. The table can later be used to validate the SQL query.

The following code is an example of supporting basic field type validation using Calcite’s AbstractTable:

import java.util.Collections;
import java.util.Properties;

import org.apache.avro.Schema;
import org.apache.calcite.config.CalciteConnectionConfigImpl;
import org.apache.calcite.jdbc.CalciteSchema;
import org.apache.calcite.prepare.CalciteCatalogReader;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.rel.type.RelDataTypeSystem;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.parser.SqlParser;
import org.apache.calcite.sql.type.SqlTypeFactoryImpl;
import org.apache.calcite.sql.type.SqlTypeName;
import org.apache.calcite.sql.validate.SqlValidator;
import org.apache.calcite.sql.validate.SqlValidatorUtil;
import org.apache.calcite.tools.Frameworks;

public class FlinkValidator {
    public static void validateSQL(String sqlQuery, Schema avroSchema) throws Exception {
        // Parse the user-supplied query
        SqlParser.Config sqlConfig = SqlParser.config()
                .withCaseSensitive(true);
        SqlParser sqlParser = SqlParser.create(sqlQuery, sqlConfig);
        SqlNode parsedQuery = sqlParser.parseQuery();

        // Build a catalog containing a table derived from the Avro schema
        RelDataTypeFactory typeFactory = new SqlTypeFactoryImpl(RelDataTypeSystem.DEFAULT);
        CalciteSchema rootSchema = createSchemaWithAvro(avroSchema);
        CalciteCatalogReader catalogReader = new CalciteCatalogReader(
                rootSchema,
                Collections.emptyList(),
                typeFactory,
                new CalciteConnectionConfigImpl(new Properties()));

        // Validate the parsed query against the Avro-derived table
        SqlValidator validator = SqlValidatorUtil.newValidator(
                Frameworks.newConfigBuilder().build().getOperatorTable(),
                catalogReader,
                typeFactory,
                SqlValidator.Config.DEFAULT
        );
        validator.validate(parsedQuery);
    }

    private static CalciteSchema createSchemaWithAvro(Schema avroSchema) {
        CalciteSchema rootSchema = CalciteSchema.createRootSchema(true);
        rootSchema.add("TABLE", new SimpleAvroTable(avroSchema));
        return rootSchema;
    }

    private static class SimpleAvroTable extends org.apache.calcite.schema.impl.AbstractTable {
        private final Schema avroSchema;

        public SimpleAvroTable(Schema avroSchema) {
            this.avroSchema = avroSchema;
        }

        @Override
        public RelDataType getRowType(RelDataTypeFactory typeFactory) {
            RelDataTypeFactory.Builder builder = typeFactory.builder();
            for (Schema.Field field : avroSchema.getFields()) {
                builder.add(field.name(), convertAvroType(field.schema(), typeFactory));
            }
            return builder.build();
        }

        private RelDataType convertAvroType(Schema schema, RelDataTypeFactory typeFactory) {
            switch (schema.getType()) {
                case STRING:
                    return typeFactory.createSqlType(SqlTypeName.VARCHAR);
                case INT:
                    return typeFactory.createSqlType(SqlTypeName.INTEGER);
                default:
                    return typeFactory.createSqlType(SqlTypeName.ANY);
            }
        }
    }
}
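A brief usage sketch follows, assuming a hypothetical Avro schema and treating a validation failure as a rejected self-service submission. Identifiers are quoted so the parser preserves the exact case used in the Avro schema.

import org.apache.avro.Schema;

public class ValidateExample {
    public static void main(String[] args) {
        // Hypothetical Avro schema describing the records in the Kafka topic
        String schemaJson = "{\"type\":\"record\",\"name\":\"Transaction\",\"fields\":["
                + "{\"name\":\"userIdentifier\",\"type\":\"string\"},"
                + "{\"name\":\"amount\",\"type\":\"int\"}]}";
        Schema avroSchema = new Schema.Parser().parse(schemaJson);

        try {
            // Validation succeeds only if the referenced fields exist with compatible types
            FlinkValidator.validateSQL("SELECT \"userIdentifier\", \"amount\" FROM \"TABLE\"", avroSchema);
            System.out.println("Query accepted");
        } catch (Exception e) {
            // Reject the submission before creating the Flink application
            System.out.println("Query rejected: " + e.getMessage());
        }
    }
}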

You can integrate this validation approach as an intermediate step before creating the application. You can create a streaming job programmatically with the AWS SDK, AWS Command Line Interface (AWS CLI), or Terraform. The validation occurs before submitting the streaming job.

Flink SQL and Confluent Avro data type mapping limitation

Flink provides several APIs designed for different levels of abstraction and user expertise:

  • Flink SQL sits at the highest level, allowing users to express data transformations using familiar SQL syntax, which is ideal for analysts and teams comfortable with relational concepts.
  • The Table API offers a similar approach but is embedded in Java or Python, enabling type-safe and more programmatic expressions.
  • For more control, the DataStream API exposes low-level constructs to manage event time, stateful operations, and complex event processing.
  • At the most granular level, the ProcessFunction API provides full access to Flink’s runtime features. It’s suitable for advanced use cases that demand detailed control over state and processing behavior.

Riskified initially used the Table API to define streaming transformations. However, when deploying their first Flink job to a staging environment, they encountered serialization errors related to the avro-confluent library and the Table API. Riskified’s schemas rely heavily on Avro Enum types, which the avro-confluent integration doesn’t fully support. Consequently, Enum fields were converted to Strings, leading to mismatches during serialization and errors when attempting to sink processed data back to Kafka using Flink’s Table API.

Riskified developed an alternative approach to overcome the Enum serialization limitations while maintaining the schema requirements. They found that Flink’s DataStream API, unlike the Table API, could correctly handle Confluent’s Avro record serialization with Enum fields. Because the pipeline only required SQL processing on the source Kafka topic and could sink to the output without any additional processing, they implemented a hybrid solution combining both APIs: the Table API is used for data processing and transformations, converting to the DataStream API only at the final output stage.

Managed Service for Apache Flink supports the Flink APIs, and an application can switch between the Table API and the DataStream API.
A MapFunction can convert the Row type of the Table API into a DataStream of GenericRecord. The MapFunction maps Flink’s Row data type into GenericRecord by iterating over the Avro schema fields and building the GenericRecord from the Flink Row, casting the Row fields into the correct data type according to the Avro schema. This conversion is required to overcome the avro-confluent library limitation with Flink SQL. A sketch of such a mapper follows the example below.

The following diagram illustrates this workflow.

Flink Table and DataStream APIs

The following code is an example:

// SQL query for filtering
Table queryResults = tableEnv.sqlQuery(
        "SELECT * FROM InputTable");

// 1. Convert query results from the Table API to a DataStream and use the DataStream API to sink the results to a Kafka topic
DataStream<Row> rowStream = tableEnv.toDataStream(queryResults);

// Fetch the schema string from the schema registry
String schemaString = fetchSchemaString(schemaRegistryURL, schemaSubjectName);

// 2. Convert Row to GenericRecord with explicit TypeInformation, using the custom AvroMapper
TypeInformation<GenericRecord> typeInfo = new GenericRecordAvroTypeInfo(avroSchema);
DataStream<GenericRecord> genericRecordStream = rowStream
        .map(new AvroMapper(schemaString))
        .returns(typeInfo); // Explicitly set TypeInformation

// 3. Define the Kafka sink using ConfluentRegistryAvroSerializationSchema
KafkaSink<GenericRecord> kafkaSink = KafkaSink.<GenericRecord>builder()
        .setBootstrapServers(bootstrapServers)
        .setRecordSerializer(
                KafkaRecordSerializationSchema.builder()
                        .setTopic(sinkTopic)
                        .setValueSerializationSchema(
                                ConfluentRegistryAvroSerializationSchema.forGeneric(
                                        schemaSubjectName,
                                        avroSchema,
                                        schemaRegistryURL
                                )
                        )
                        .build()
        )
        .build();

// Sink to Kafka
genericRecordStream.sinkTo(kafkaSink);
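The AvroMapper referenced above is a custom class. The following is a minimal sketch under the assumptions that the Row fields are named after the Avro schema fields and that only Enum values need explicit conversion; nullable unions and nested types would need additional handling.

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.types.Row;

public class AvroMapper implements MapFunction<Row, GenericRecord> {
    // The Avro Schema class is not serializable, so the schema string is parsed lazily on the task
    private final String schemaString;
    private transient Schema schema;

    public AvroMapper(String schemaString) {
        this.schemaString = schemaString;
    }

    @Override
    public GenericRecord map(Row row) {
        if (schema == null) {
            schema = new Schema.Parser().parse(schemaString);
        }
        GenericRecord record = new GenericData.Record(schema);
        for (Schema.Field field : schema.getFields()) {
            Object value = row.getField(field.name());
            // Convert according to the Avro type; here only Enum fields are handled explicitly
            if (value != null && field.schema().getType() == Schema.Type.ENUM) {
                value = new GenericData.EnumSymbol(field.schema(), value.toString());
            }
            record.put(field.name(), value);
        }
        return record;
    }
}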

CI/CD with Managed Service for Apache Flink

With Managed Service for Apache Flink, you can run a job by selecting an Amazon Simple Storage Service (Amazon S3) key containing the application JAR. Riskified’s Flink code base was structured as a multi-module repository to support additional use cases beyond self-service SQL. Each Flink job’s source code in the repository is an independent Java module. The CI pipeline implemented a robust build and deployment process consisting of the following steps:

  1. Build and compile each module.
  2. Run tests.
  3. Package the modules.
  4. Upload the artifact to the artifacts bucket twice: one JAR under -.jar and the second as -latest.jar, resembling a Docker registry like Amazon Elastic Container Registry (Amazon ECR). Managed Service for Apache Flink jobs use the latest-tagged artifact in this case. However, a copy of old artifacts is kept for code rollback purposes.

A CD process follows:

  1. When a change is merged, the pipeline lists all jobs for each module using the AWS CLI for Managed Service for Apache Flink.
  2. The application JAR location is updated for each application, which triggers a deployment (see the SDK sketch after this list).
  3. When the application is in a running state with no errors, the pipeline proceeds to the next application.
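A minimal sketch of step 2 using the AWS SDK for Java v2 might look like the following; the application name, bucket ARN, and file key are placeholders.

import software.amazon.awssdk.services.kinesisanalyticsv2.KinesisAnalyticsV2Client;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.ApplicationCodeConfigurationUpdate;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.ApplicationConfigurationUpdate;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.CodeContentUpdate;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.DescribeApplicationRequest;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.S3ContentLocationUpdate;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.UpdateApplicationRequest;

public class DeployLatestJar {
    public static void main(String[] args) {
        try (KinesisAnalyticsV2Client client = KinesisAnalyticsV2Client.create()) {
            String applicationName = "self-serve-sql-job"; // placeholder

            // The update call requires the current application version ID
            long currentVersion = client.describeApplication(
                    DescribeApplicationRequest.builder()
                            .applicationName(applicationName)
                            .build())
                    .applicationDetail()
                    .applicationVersionId();

            // Point the application to the newly uploaded "latest" JAR in S3, which triggers a deployment
            client.updateApplication(UpdateApplicationRequest.builder()
                    .applicationName(applicationName)
                    .currentApplicationVersionId(currentVersion)
                    .applicationConfigurationUpdate(ApplicationConfigurationUpdate.builder()
                            .applicationCodeConfigurationUpdate(ApplicationCodeConfigurationUpdate.builder()
                                    .codeContentUpdate(CodeContentUpdate.builder()
                                            .s3ContentLocationUpdate(S3ContentLocationUpdate.builder()
                                                    .bucketARNUpdate("arn:aws:s3:::my-artifacts-bucket") // placeholder
                                                    .fileKeyUpdate("flink-jobs/self-serve-sql-latest.jar") // placeholder
                                                    .build())
                                            .build())
                                    .build())
                            .build())
                    .build());
        }
    }
}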

To allow safe deployment, this process is performed gradually for each environment, starting with the staging environment.

Self-service interface for submitting SQL jobs

Riskified believes an intuitive UI is crucial for system adoption and efficiency. However, creating a dedicated UI for Flink job submission requires a pragmatic approach, because it might not be worth investing in unless there’s already a web interface for internal development operations.

Investing in UI development should align with the organization’s existing tools and workflows. Riskified had an internal web portal for similar operations, which made the addition of Flink job submission capabilities a natural extension of the self-service infrastructure.

The AWS SDK was installed on the web server to allow interaction with AWS components. The client receives user input from the UI and translates it into runtime properties that control the behavior of the Flink application. The web server then uses the CreateApplication API action to submit the job to Managed Service for Apache Flink (a rough SDK sketch follows the diagram below).

Although an intuitive UI significantly enhances system adoption, it’s not the only path to accessibility. Alternatively, a well-designed CLI tool or REST API endpoint can provide the same self-service capabilities.

The following diagram illustrates this workflow.

Flow sequence diagram
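As a rough illustration of the CreateApplication call the web server issues, the following sketch uses the AWS SDK for Java v2. The application name, role ARN, S3 location, property group name, and property keys are placeholders; the runtime property group is how Managed Service for Apache Flink passes the user input to the job.

import java.util.Map;

import software.amazon.awssdk.services.kinesisanalyticsv2.KinesisAnalyticsV2Client;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.ApplicationCodeConfiguration;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.ApplicationConfiguration;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.CodeContent;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.CodeContentType;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.CreateApplicationRequest;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.EnvironmentProperties;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.PropertyGroup;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.RuntimeEnvironment;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.S3ContentLocation;

public class SubmitSqlJob {
    public static void main(String[] args) {
        try (KinesisAnalyticsV2Client client = KinesisAnalyticsV2Client.create()) {
            // User input from the UI, passed to the job as runtime properties (keys are illustrative)
            Map<String, String> userInput = Map.of(
                    "input.topic", "transactions",
                    "output.topic", "velocity-alerts",
                    "sql.query", "SELECT * FROM InputTable");

            client.createApplication(CreateApplicationRequest.builder()
                    .applicationName("self-serve-sql-velocity-check") // placeholder
                    .runtimeEnvironment(RuntimeEnvironment.FLINK_1_18)
                    .serviceExecutionRole("arn:aws:iam::123456789012:role/flink-app-role") // placeholder
                    .applicationConfiguration(ApplicationConfiguration.builder()
                            .applicationCodeConfiguration(ApplicationCodeConfiguration.builder()
                                    .codeContentType(CodeContentType.ZIPFILE)
                                    .codeContent(CodeContent.builder()
                                            .s3ContentLocation(S3ContentLocation.builder()
                                                    .bucketARN("arn:aws:s3:::my-artifacts-bucket") // placeholder
                                                    .fileKey("flink-jobs/self-serve-sql-latest.jar") // placeholder
                                                    .build())
                                            .build())
                                    .build())
                            .environmentProperties(EnvironmentProperties.builder()
                                    .propertyGroups(PropertyGroup.builder()
                                            .propertyGroupId("PipelineProperties")
                                            .propertyMap(userInput)
                                            .build())
                                    .build())
                            .build())
                    .build());
        }
    }
}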

Production experience: Flink’s implementation upsides

The transition to Flink and Managed Service for Apache Flink proved efficient in numerous aspects:

  • Schema evolution and data handling – Riskified can either periodically fetch updated schemas or restart applications when schemas evolve. They can use existing schemas without self-registration.
  • Resource isolation and management – Managed Service for Apache Flink runs each Flink job as an isolated cluster, reducing resource contention between jobs.
  • Resource allocation and cost-efficiency – Managed Service for Apache Flink allows minimal resource allocation with automatic scaling, proving to be more cost-efficient.
  • Job management and flow visibility – Flink provides a cohesive data flow abstraction through its job and task model. It manages the entire data flow in a single job and distributes the workload evenly over multiple nodes. This unified approach allows better visibility into the entire data pipeline, simplifying monitoring, troubleshooting, and optimization of complex streaming workflows.
  • Built-in recovery mechanism – Managed Service for Apache Flink automatically creates checkpoints and savepoints that enable stateful Flink applications to recover from failures and resume processing without data loss. With this feature, streaming jobs are durable and can recover safely from errors.
  • Comprehensive observability – Managed Service for Apache Flink exposes CloudWatch metrics that monitor Flink application performance and statistics. You can also create alarms based on these metrics. Riskified decided to use the CloudWatch Prometheus Exporter to export these metrics to Prometheus and build PrometheusRules to align Flink’s monitoring with the Riskified standard, which uses Prometheus and Grafana for monitoring and alerting.

Next steps

Although the initial focus was Kafka-to-Kafka streaming queries, Flink’s wide selection of sink connectors opens the possibility of pluggable multi-destination pipelines. This versatility is on Riskified’s roadmap for future enhancements.

Flink’s DataStream API provides capabilities that extend far beyond self-serve streaming SQL, opening new avenues for more sophisticated fraud detection use cases. Riskified is exploring ways to use the DataStream API to enhance ecommerce fraud prevention strategies.

Conclusions

In this post, we shared how Riskified successfully transitioned from ksqlDB to Managed Service for Apache Flink for its self-serve streaming SQL engine. This move addressed key challenges like schema evolution, resource isolation, and pipeline management. Managed Service for Apache Flink offers features such as isolated job environments, automatic scaling, and built-in monitoring, which proved more efficient and cost-effective. Although Flink SQL limitations with Kafka required workarounds, using Flink’s DataStream API and user-defined functions resolved these issues. The transition has paved the way for future expansion with multiple targets and advanced fraud detection capabilities, solidifying Flink as a robust and scalable solution for Riskified’s streaming needs.

If Riskified’s journey has sparked your interest in building a self-service streaming SQL platform, here’s how to get started:

  • Learn more about Managed Service for Apache Flink:
  • Get hands-on experience:

About the authors

Gal Krispel is a Data Platform Engineer at Riskified, specializing in streaming technologies such as Apache Kafka and Apache Flink. He focuses on building scalable, real-time data pipelines that power Riskified’s core products. Gal is particularly interested in making complex data architectures accessible and efficient across the organization. His work spans real-time analytics, event-driven design, and the seamless integration of stream processing into large-scale production systems.

Sofia Zilberman works as a Senior Streaming Solutions Architect at AWS, helping customers design and optimize real-time data pipelines using open source technologies like Apache Flink, Kafka, and Apache Iceberg. With experience in both streaming and batch data processing, she focuses on making data workflows efficient, observable, and high-performing.

Lorenzo Nicora works as a Senior Streaming Solutions Architect at AWS, helping customers across EMEA. He has been building cloud-centered, data-intensive systems for over 25 years, working across industries both through consultancies and product companies. He has used open source technologies extensively and contributed to several projects, including Apache Flink, and is the maintainer of the Flink Prometheus connector.
