Building a real-time ICU patient analytics pipeline with AWS Lambda event source mapping


In hospital intensive care units (ICUs), continuous patient monitoring is critical. Medical devices generate vast amounts of real-time data on vital signs such as heart rate, blood pressure, and oxygen saturation. The key challenge lies in early detection of patient deterioration through vital sign trending. Healthcare teams must process thousands of data points per patient every day to identify concerning patterns, a task critical for timely intervention and potentially life-saving care.

AWS Lambda event source mapping can help in this scenario by automatically polling data streams and triggering functions in real time without additional infrastructure management. By using AWS Lambda for real-time processing of sensor data and storing aggregated results in Apache Iceberg tables, a table format designed for large analytic datasets, in Amazon Simple Storage Service (Amazon S3) buckets, medical teams can achieve both immediate alerting capabilities and long-term analytical insights, improving their ability to provide timely and effective care.

In this post, we demonstrate how to build a serverless architecture that processes real-time ICU patient monitoring data using Lambda event source mapping for immediate alert generation and data aggregation, followed by persistent storage in Amazon S3 with an Iceberg catalog for comprehensive healthcare analytics. The solution demonstrates how to handle high-frequency vital sign data, implement critical threshold monitoring, and create a scalable analytics platform that can grow with your healthcare organization's needs and help monitor sensor alert fatigue in the ICU.

Architecture

The following architecture diagram illustrates a real-time ICU patient analytics system.

In this architecture, real-time patient monitoring data from hospital ICU sensors is ingested into AWS IoT Core, which then streams the data into Amazon Kinesis Data Streams. Two Lambda functions consume this streaming data concurrently for different purposes, both using the Lambda event source mapping integration with Kinesis Data Streams. The first Lambda function uses the filtering feature of event source mapping to detect critical health events where SpO2 (blood oxygen saturation) levels fall below 90%, immediately triggering notifications to caregivers through Amazon Simple Notification Service (Amazon SNS) for rapid response. The second Lambda function employs the tumbling window feature of event source mapping to aggregate sensor data over 10-minute intervals. This aggregated data is then systematically stored in S3 buckets in Apache Iceberg format for historical analysis and reporting. The entire pipeline operates in a serverless manner, providing scalable, real-time processing of critical healthcare data while maintaining both immediate alerting capabilities and long-term data storage for analytics.

Amazon S3, with its support for the Apache Iceberg table format, enables healthcare organizations to efficiently store and query large volumes of time-series patient data. This solution allows for complex analytical queries across historical patient data while maintaining high performance and cost efficiency.

Prerequisites

To implement the solution provided in this post, you should have the following:

  • An active AWS account
  • IAM permissions to deploy CloudFormation templates and provision AWS resources
  • Python installed on your machine to run the ICU patient sensor data simulator code

Deploy a real-time ICU patient analytics pipeline using CloudFormation

You use AWS CloudFormation templates to create the resources for a real-time data analytics pipeline.

  1. To get started, sign in to the console as an account user and select the appropriate Region.
  2. Download and launch the CloudFormation template in the AWS Region where you want to host the Lambda functions.
  3. Choose Next.
  4. On the Specify stack details page, enter a Stack name (for example, IoTHealthMonitoring).
  5. For Parameters, enter the following:
    1. IoTTopic: Enter the MQTT topic for your IoT devices (for example, icu/sensors).
    2. EmailAddress: Enter an email address for receiving notifications.
  6. Wait for the stack creation to complete. This process might take 5-10 minutes.
  7. After the CloudFormation stack completes, it creates the following resources:
    1. An AWS IoT Core rule that captures data from the specified IoTTopic topic and routes it to the Kinesis data stream.
    2. A Kinesis data stream for ingesting IoT sensor data.
    3. Two Lambda functions:
      • FilterSensorData: Monitors critical health metrics and sends alerts.
      • AggregateSensorData: Aggregates sensor data in 10-minute windows.
    4. An Amazon DynamoDB table (NotificationTimestamps) to store notification timestamps for rate limiting alerts.
    5. An Amazon SNS topic and subscription to send email notifications for critical patient conditions.
    6. An Amazon Data Firehose delivery stream to deliver processed data to Amazon S3 in Iceberg format.
    7. Amazon S3 buckets to store sensor data.
    8. Amazon Athena and AWS Glue resources for the database and an Iceberg table for querying aggregated data.
    9. AWS Identity and Access Management (IAM) roles and policies to provide the required permissions for the AWS IoT rule, Lambda functions, and Data Firehose stream.
    10. Amazon CloudWatch log groups to record Data Firehose and Lambda function activity.

Solution walkthrough

Now that you've deployed the solution, let's review a functional walkthrough. First, simulate patient vital signs data and send it to AWS IoT Core using the following Python code on your local machine. To run this code successfully, ensure that you have the required IAM permissions to publish messages to the IoT topic in the AWS account where the solution is deployed.

import boto3
import json
import random
import time

# AWS IoT Data client
iot_data_client = boto3.client(
    'iot-data',
    region_name="us-west-2"
)

# IoT topic to publish to
topic = "icu/sensors"

# Fixed set of patient IDs
patient_ids = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

print("Infinite sensor data simulation...")
try:
    while True:
        for patient_id in patient_ids:
            # Generate sensor data
            message = {
                "patient_id": patient_id,
                "timestamp": int(time.time()),
                "spo2": random.randint(91, 99),
                "heart_rate": random.randint(60, 100),
                "temperature_f": round(random.uniform(97.0, 100.0), 1)
            }
            # Publish to the topic
            response = iot_data_client.publish(
                topic=topic,
                qos=1,
                payload=json.dumps(message)
            )
            print(f"Published: {message}")
        # Wait 30 seconds before the next round
        print("Sleeping for 30 seconds...\n")
        time.sleep(30)
except KeyboardInterrupt:
    print("\nSimulation stopped by user.")

The following is the format of a sample ICU sensor message produced by the simulator.

{
    "patient_id": 1,
    "timestamp": 1683000000,
    "spo2": 85,
    "heart_rate": 75,
    "temperature_f": 98.6
}

Data is published to the icu/sensors IoT topic every 30 seconds for 10 different patients, creating a continuous stream of ICU patient monitoring data. Messages published to AWS IoT Core are passed to Kinesis Data Streams by a message routing rule deployed by the solution.
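The CloudFormation stack creates this rule for you. As a minimal sketch, assuming illustrative names for the rule, stream, and IAM role, an equivalent rule could be defined with boto3 as follows:

import boto3

iot_client = boto3.client("iot", region_name="us-west-2")

# Illustrative names; the CloudFormation stack creates the actual rule, stream, and role
iot_client.create_topic_rule(
    ruleName="IcuSensorsToKinesis",
    topicRulePayload={
        # Forward every message published on the icu/sensors topic
        "sql": "SELECT * FROM 'icu/sensors'",
        "actions": [{
            "kinesis": {
                "streamName": "icu-sensor-stream",
                "roleArn": "arn:aws:iam::123456789012:role/IoTKinesisRole",
                # Use the patient_id field as the Kinesis partition key
                "partitionKey": "${patient_id}"
            }
        }]
    }
)

The substitution template ${patient_id} places each patient's readings on the same Kinesis partition key, which is what the downstream Lambda functions use to group records per patient.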

Two Lambda functions consume data from the Kinesis data stream concurrently, both using the Lambda event source mapping integration with Kinesis Data Streams.

Event source mapping

Lambda event source mapping automatically triggers Lambda functions in response to data changes from supported event sources such as Amazon DynamoDB Streams, Amazon Kinesis Data Streams, Amazon Simple Queue Service (Amazon SQS), Amazon MQ, and Amazon Managed Streaming for Apache Kafka. This serverless integration works by having Lambda poll these sources for new records, which are then processed in configurable batch sizes ranging from 1 to 10,000 records. When new data is detected, Lambda automatically invokes the function synchronously, handling scaling automatically based on the workload. The service supports at-least-once delivery and provides robust error handling through retry policies and dead-letter queues for failed events. Event source mappings can be fine-tuned through various parameters such as batch windows, maximum record age, and retry attempts, making them highly adaptable to different use cases. This feature is particularly valuable in event-driven architectures, because customers can focus on business logic while AWS manages the complexities of event processing, scaling, and reliability.
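As an illustration of these tuning options, the following boto3 sketch creates a mapping with an explicit batch size, batching window, and retry policy; the stream ARN, function name, and values are illustrative rather than the exact ones the template deploys:

import boto3

lambda_client = boto3.client("lambda")

# Illustrative ARN, function name, and tuning values
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-west-2:123456789012:stream/icu-sensor-stream",
    FunctionName="FilterSensorData",
    StartingPosition="LATEST",
    BatchSize=100,                     # 1 to 10,000 records per batch
    MaximumBatchingWindowInSeconds=5,  # wait up to 5 seconds to fill a batch
    MaximumRetryAttempts=2,            # retry a failed batch up to twice
    MaximumRecordAgeInSeconds=3600     # skip records older than one hour
)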

Event source mapping uses tumbling windows and filtering to process and analyze data.

Tumbling windows

Tumbling windows in Lambda event processing enable data aggregation in fixed, non-overlapping time intervals, where each event belongs to exactly one window. This is ideal for time-based analytics and periodic reporting. When combined with event source mapping, this approach allows efficient batch processing of events within defined time intervals (for example, 10-minute windows), enabling calculations such as average vital signs or cumulative fluid intake and output while optimizing function invocations and resource utilization.
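A tumbling window is a property of the event source mapping itself. A minimal boto3 sketch, assuming the same illustrative stream and the 10-minute window this solution uses, looks like this:

import boto3

lambda_client = boto3.client("lambda")

# Illustrative ARN and name; TumblingWindowInSeconds accepts 0-900 seconds
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-west-2:123456789012:stream/icu-sensor-stream",
    FunctionName="AggregateSensorData",
    StartingPosition="LATEST",
    TumblingWindowInSeconds=600  # aggregate over non-overlapping 10-minute windows
)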

When you configure an event source mapping between Kinesis Data Streams and a Lambda function, use the Tumbling Window Duration setting, which appears in the trigger configuration in the Lambda console. The solution you deployed using the CloudFormation template includes the AggregateSensorData Lambda function, which uses a 10-minute tumbling window configuration. Depending on the volume of messages flowing through the Amazon Kinesis stream, the AggregateSensorData function can be invoked multiple times for each 10-minute window, sequentially, with the following attributes in the event supplied to the function:

  • Window start and end: The beginning and ending timestamps for the current tumbling window.
  • State: An object containing the state returned from the previous invocation, which is initially empty. The state object can contain up to 1 MB of data.
  • isFinalInvokeForWindow: Indicates whether this is the final invocation for the tumbling window. This occurs only once per window period.
  • isWindowTerminatedEarly: A window ends early only if the state exceeds the maximum allowed size of 1 MB.
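Putting these attributes together, an interim invocation within a window might receive an event shaped like the following sketch (values are illustrative; Records carries the batch of Kinesis records for that invocation):

{
    "window": {
        "start": "2025-05-29T20:50:00Z",
        "end": "2025-05-29T21:00:00Z"
    },
    "state": {},
    "isFinalInvokeForWindow": false,
    "isWindowTerminatedEarly": false,
    "eventSourceARN": "arn:aws:kinesis:us-west-2:123456789012:stream/icu-sensor-stream",
    "Records": []
}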

In a tumbling window, there is a sequence of Lambda invocations in the following pattern:

AggregateSensorData Lambda code snippet:

import base64
import json
import os

import boto3


def handler(event, context):
    state_across_window = event['state']

    # Iterate through each record and decode the base64 data
    for record in event['Records']:
        encoded_data = record['kinesis']['data']
        partition_key = record['kinesis']['partitionKey']
        decoded_bytes = base64.b64decode(encoded_data)
        decoded_str = decoded_bytes.decode('utf-8')
        decoded_json = json.loads(decoded_str)

        # Create the partition_key entry in state if it doesn't exist yet
        if partition_key not in state_across_window:
            state_across_window[partition_key] = {
                "min_spo2": decoded_json['spo2'],
                "max_spo2": decoded_json['spo2'],
                "avg_spo2": decoded_json['spo2'],
                "sum_spo2": decoded_json['spo2'],
                "min_heart_rate": decoded_json['heart_rate'],
                "max_heart_rate": decoded_json['heart_rate'],
                "avg_heart_rate": decoded_json['heart_rate'],
                "sum_heart_rate": decoded_json['heart_rate'],
                "min_temperature_f": decoded_json['temperature_f'],
                "max_temperature_f": decoded_json['temperature_f'],
                "avg_temperature_f": decoded_json['temperature_f'],
                "sum_temperature_f": decoded_json['temperature_f'],
                "record_count": 1
            }
        else:
            current = state_across_window[partition_key]
            min_spo2 = min(current['min_spo2'], decoded_json['spo2'])
            max_spo2 = max(current['max_spo2'], decoded_json['spo2'])
            sum_spo2 = current['sum_spo2'] + decoded_json['spo2']

            min_heart_rate = min(current['min_heart_rate'], decoded_json['heart_rate'])
            max_heart_rate = max(current['max_heart_rate'], decoded_json['heart_rate'])
            sum_heart_rate = current['sum_heart_rate'] + decoded_json['heart_rate']

            min_temperature_f = min(current['min_temperature_f'], decoded_json['temperature_f'])
            max_temperature_f = max(current['max_temperature_f'], decoded_json['temperature_f'])
            sum_temperature_f = current['sum_temperature_f'] + decoded_json['temperature_f']

            record_count = current['record_count'] + 1
            avg_spo2 = sum_spo2 / record_count
            avg_heart_rate = sum_heart_rate / record_count
            avg_temperature_f = sum_temperature_f / record_count

            state_across_window[partition_key] = {
                "min_spo2": min_spo2,
                "max_spo2": max_spo2,
                "avg_spo2": avg_spo2,
                "sum_spo2": sum_spo2,
                "min_heart_rate": min_heart_rate,
                "max_heart_rate": max_heart_rate,
                "avg_heart_rate": avg_heart_rate,
                "sum_heart_rate": sum_heart_rate,
                "min_temperature_f": min_temperature_f,
                "max_temperature_f": max_temperature_f,
                "avg_temperature_f": avg_temperature_f,
                "sum_temperature_f": sum_temperature_f,
                "record_count": record_count
            }

    # Determine if this is the final invocation for the window (window end)
    is_final_window = event.get('isFinalInvokeForWindow', False)
    # Determine if the window was terminated early (state exceeded 1 MB)
    is_terminated_window = event.get('isWindowTerminatedEarly', False)
    window_start = event['window']['start']
    window_end = event['window']['end']

    if is_final_window or is_terminated_window:
        # Flush the aggregated state for each patient to the Firehose stream
        firehose_client = boto3.client('firehose')
        firehose_stream = os.environ['FIREHOSE_STREAM_NAME']
        for key, value in state_across_window.items():
            value['patient_id'] = key
            value['window_start'] = window_start
            value['window_end'] = window_end

            firehose_client.put_record(
                DeliveryStreamName=firehose_stream,
                Record={'Data': json.dumps(value)}
            )

        return {
            "state": {},
            "batchItemFailures": []
        }
    else:
        print(f"interim call for window: ws: {window_start} we: {window_end}")
        return {
            "state": state_across_window,
            "batchItemFailures": []
        }

  • The first invocation contains an empty state object in the event. The function returns a state object containing custom attributes that are specific to the aggregation logic.
  • The second invocation contains the state object returned by the first Lambda invocation. This invocation returns an updated state object with new aggregated values. Subsequent invocations follow this same sequence. The following is a sample of the aggregated state, which can be supplied to subsequent Lambda invocations within the same 10-minute tumbling window.
{
    "min_spo2": 88,
    "max_spo2": 90,
    "avg_spo2": 89.2,
    "sum_spo2": 625,
    "min_heart_rate": 21,
    "max_heart_rate": 22,
    "avg_heart_rate": 21.1,
    "sum_heart_rate": 148,
    "min_temperature_f": 90,
    "max_temperature_f": 91,
    "avg_temperature_f": 90.1,
    "sum_temperature_f": 631,
    "record_count": 7,
    "patient_id": "44",
    "window_start": "2025-05-29T20:51:00Z",
    "window_end": "2025-05-29T20:52:00Z"
}

  • The final invocation in the tumbling window has the isFinalInvokeForWindow flag set to true and contains the state returned by the most recent Lambda invocation. This invocation is responsible for passing the aggregated state messages to the Data Firehose stream, which delivers the data to the Amazon S3 bucket in Iceberg format.
  • After the aggregated data is delivered to Amazon S3, you can query it using Athena.
Query: SELECT * FROM "cfdb_<>"."table_<>"

Sample result of the preceding Athena query:

Event source mapping with filtering

Lambda event source mapping with filtering optimizes data processing from sources like Amazon Kinesis by applying JSON pattern filtering before function invocation. This is demonstrated in the ICU patient monitoring solution, where the system filters for SpO2 readings from Kinesis Data Streams that are below 90%. Instead of processing all incoming data, the filtering capability selectively processes only critical readings, significantly reducing costs and processing overhead. The solution uses DynamoDB for sophisticated state management, tracking low SpO2 events through a schema combining PatientID and timestamp-based keys within defined monitoring windows.
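As a sketch of how such a filter can be expressed, the following boto3 call attaches a filter criteria pattern that matches only records whose decoded data payload has spo2 below 90 (the stream ARN and function name are illustrative):

import json

import boto3

lambda_client = boto3.client("lambda")

# Only invoke the function when the decoded record payload has spo2 < 90
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-west-2:123456789012:stream/icu-sensor-stream",
    FunctionName="FilterSensorData",
    StartingPosition="LATEST",
    FilterCriteria={
        "Filters": [
            {"Pattern": json.dumps({"data": {"spo2": [{"numeric": ["<", 90]}]}})}
        ]
    }
)

For Kinesis event sources, Lambda base64-decodes each record's data field and, when it parses as JSON, evaluates the pattern against it, so records with healthy SpO2 readings never invoke the function.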

This state-aware implementation balances clinical urgency with operational efficiency by sending immediate Amazon SNS notifications when critical conditions are first detected while enforcing a 15-minute alert suppression window to prevent alert fatigue among healthcare providers. By maintaining state across multiple Lambda invocations, the system helps ensure rapid response to potentially life-threatening situations while minimizing unnecessary notifications for the same patient condition. The combination of Lambda event filtering, DynamoDB state management, and reliable alert delivery through Amazon SNS creates a robust, scalable healthcare monitoring solution that exemplifies how AWS services can be strategically combined to address complex requirements while balancing technical efficiency with clinical effectiveness.

FilterSensorData Lambda code snippet:

import base64
import json
import os
import time

import boto3

sns_client = boto3.client('sns')
dynamodb = boto3.resource('dynamodb')
table_name = os.environ['DYNAMODB_TABLE']
sns_topic_arn = os.environ['SNS_TOPIC_ARN']
table = dynamodb.Table(table_name)

FIFTEEN_MINUTES = 15 * 60  # 15 minutes in seconds


def handler(event, context):
    for record in event['Records']:
        print(f"Filtered event: {record}")
        encoded_data = record['kinesis']['data']
        partition_key = record['kinesis']['partitionKey']
        decoded_bytes = base64.b64decode(encoded_data)
        decoded_str = decoded_bytes.decode('utf-8')

        # Check the last notification timestamp in DynamoDB
        try:
            response = table.get_item(Key={'partition_key': partition_key})
            item = response.get('Item')
            now = int(time.time())
            if item:
                last_sent = item.get('timestamp', 0)
                if now - last_sent < FIFTEEN_MINUTES:
                    print(f"Notification for {partition_key} skipped (sent recently)")
                    continue

            # Send an SNS notification
            sns_response = sns_client.publish(
                TopicArn=sns_topic_arn,
                Message=f"Patient SpO2 below 90 percent event information: {decoded_str}",
                Subject=f"Low SpO2 detected for patient ID {partition_key}"
            )
            print("Message sent to SNS! MessageId:", sns_response['MessageId'])

            # Update DynamoDB with the current timestamp and a TTL
            table.put_item(Item={
                'partition_key': partition_key,
                'timestamp': now,
                'ttl': now + FIFTEEN_MINUTES + 60  # Add an extra buffer to the TTL
            })
        except Exception as e:
            print("Error processing event:", e)
            return {
                'statusCode': 500,
                'body': json.dumps('Error processing event')
            }
    return {
        'statusCode': 200,
        'body': {}
    }

To generate an alert notification through the deployed solution, update the preceding simulator code by setting the SpO2 value to less than 90, and run it again. Within 1 minute, you should receive an alert notification at the email address you provided during stack creation. The following image is an example of an alert notification generated by the deployed solution.
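For example, you could change the spo2 line in the simulator's message dictionary so every reading falls below the threshold:

# Generate readings below the 90% alert threshold to trigger notifications
"spo2": random.randint(80, 89),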

Clean up

To avoid ongoing costs after completing this tutorial, delete the CloudFormation stack that you deployed earlier in this post. This removes most of the AWS resources created for this solution. You might need to manually delete objects created in Amazon S3, because CloudFormation won't remove non-empty buckets during stack deletion.
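If you prefer to script the cleanup, the following boto3 sketch empties a bucket and then deletes the stack; the bucket and stack names are illustrative, so substitute the names from your deployment:

import boto3

# Illustrative names; use the bucket names from the stack outputs and your stack name
s3 = boto3.resource("s3")
s3.Bucket("icu-sensor-data-bucket").objects.all().delete()  # empty the bucket first

cloudformation = boto3.client("cloudformation")
cloudformation.delete_stack(StackName="IoTHealthMonitoring")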

Conclusion

As demonstrated in this post, you can build a serverless real-time analytics pipeline for healthcare monitoring by using AWS IoT Core, Amazon S3 buckets with Iceberg format, and Amazon Kinesis Data Streams integration with AWS Lambda event source mapping. This architectural approach eliminates the need for complex code while enabling immediate critical patient care alerts and data aggregation for analysis using Lambda. The solution is particularly valuable for healthcare organizations looking to modernize their patient monitoring systems with real-time capabilities. The architecture can be extended to handle various medical devices and sensor data streams, making it adaptable for different healthcare monitoring scenarios. This post presents one implementation approach, and organizations adopting this solution should ensure that the architecture and code meet their specific application performance, security, privacy, and regulatory compliance needs.

If this post helps you or inspires you to solve a problem, we would love to hear about it!


About the authors

Nihar Sheth

Nihar is a Senior Product Manager on the AWS Lambda team at Amazon Web Services. He is passionate about building intuitive product experiences that solve complex customer problems and enable customers to achieve their business goals.

Pratik Patel

Pratik is a Senior Technical Account Manager and streaming analytics specialist. He works with AWS customers, providing ongoing support and technical guidance to help them plan and build solutions using best practices, and proactively helps keep customers' AWS environments operationally healthy.

Priyanka Chaudhary

Priyanka is a Senior Solutions Architect at AWS. She specializes in data lake and analytics services and helps many customers in this area. As a Solutions Architect, she plays a crucial role in guiding strategic customers through their cloud journey by designing scalable and secure cloud solutions. Outside of work, she loves spending time with family and friends, watching movies, and traveling.
