How do you resolve overlapping prefix or suffix issues in S3 event notification configurations?
I had an S3 event notification configured at the root of mybucket so that one service received an event for every file landing in the bucket. Later, another service needed events only for specific prefixes and file types. Because an event notification already existed at the root of mybucket, I could not create one for those other prefixes or file types, and I got the following error:
“Configuration is ambiguously defined. Cannot have overlapping suffixes in two rules if the prefixes are overlapping for the same event type.”
The error can occur when you’re doing one of the following:
- Recreating an S3 event notification that you recently deleted.
- Creating S3 event notifications for multiple overlapping events using overlapping prefixes or suffixes.
Note: Amazon S3 event notification configurations allow overlapping events with non-overlapping prefixes or suffixes. The configurations also allow non-overlapping events with overlapping prefixes or suffixes.
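To illustrate what the note above allows, here is a hypothetical notification configuration with two rules that share the same event type but use non-overlapping prefixes, which S3 accepts (the bucket name, rule IDs, and function ARNs are placeholders of my choosing):

```python
# Hypothetical example: two rules with the SAME event type but
# NON-overlapping prefixes -- S3 accepts this configuration.
notification_config = {
    'LambdaFunctionConfigurations': [
        {
            'Id': 'images-rule',
            'LambdaFunctionArn': 'arn:aws:lambda:us-east-1:123456789012:function:ImageHandler',
            'Events': ['s3:ObjectCreated:*'],
            'Filter': {'Key': {'FilterRules': [
                {'Name': 'prefix', 'Value': 'images/'},
            ]}},
        },
        {
            'Id': 'logs-rule',
            'LambdaFunctionArn': 'arn:aws:lambda:us-east-1:123456789012:function:LogHandler',
            'Events': ['s3:ObjectCreated:*'],
            'Filter': {'Key': {'FilterRules': [
                {'Name': 'prefix', 'Value': 'logs/'},
            ]}},
        },
    ],
}

# To apply it (requires boto3 and AWS credentials):
# boto3.client('s3').put_bucket_notification_configuration(
#     Bucket='mybucket', NotificationConfiguration=notification_config)
```

If the two rules used the prefixes `images/` and `images/raw/` instead, the prefixes would overlap and S3 would reject the configuration with the same ambiguity error.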
Redesign for your use case
If you can’t reconfigure your S3 event notification to avoid the overlap, try redesigning your architecture to work around it.
Resolve “Configuration is ambiguously defined” error for Lambda
The Configuration is ambiguously defined error occurs when a notification’s event information and its prefix or suffix…
Workarounds mentioned in the AWS article
Option 1: Manage a routing Lambda function and update its code for every new service that wants to consume the events.
Option 2: Use the fanout method to send event notifications to all services, and design each subscribed function with logic to decide whether to process the events it receives.
Option 3: Enable object-level logging of Amazon S3 actions to AWS CloudTrail, then use an Amazon CloudWatch Events rule to trigger your Lambda function based on the Amazon S3 event pattern. However, delivery latency with CloudTrail is noticeably higher than with S3 notifications.
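With Option 2's fanout approach, each subscribed function carries its own filtering logic. A minimal sketch of such a per-consumer guard (the prefix and suffix values are hypothetical):

```python
def should_process(key, allowed_prefixes=('reports/',), allowed_suffixes=('.csv',)):
    """Decide whether this consumer cares about the given S3 object key.

    Each fanout subscriber filters the events itself: accept only keys
    matching this service's prefixes and suffixes, ignore the rest.
    """
    return (key.startswith(tuple(allowed_prefixes))
            and key.endswith(tuple(allowed_suffixes)))
```

The drawback is that every event is still delivered to every subscriber, so each function is invoked (and billed) even for events it immediately discards.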
I chose to leverage Amazon SNS subscription filter policies, but the challenge was that SNS subscription filter policies require message attributes, and when S3 publishes an event notification it does not include any message attributes.
At present there is no built-in option to handle this scenario, so I decided to build a solution that attaches object/file metadata as message attributes and publishes the event to an SNS topic. Any service looking for a particular category of events can then subscribe to the SNS topic with an appropriate subscription filter policy.
Note: if a service needs all events, it can leave the subscription filter policy at its default (no filter).
1. Create an SNS topic.
2. Create a Lambda function with a resource-based policy that allows the S3 bucket to send it event notifications, and give the function the SNS topic ARN. In the Lambda function I add the following message attributes: file name, file prefix, file type, and source bucket. You can add more as your use case requires and use them in the SNS subscription filter policy.
import os
import urllib.parse

import boto3
from botocore.exceptions import ClientError

sns_client = boto3.client('sns')

def lambda_handler(event, context):
    # S3 may deliver one or more records per invocation
    for record in event['Records']:
        source_bucket = record['s3']['bucket']['name']
        # Object keys arrive URL-encoded (e.g. spaces become '+')
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
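The attribute-building and publish step might look like the following sketch. The helper names, the topic ARN environment variable, and the exact attribute names are my choices for illustration, not prescribed ones; note that SNS rejects empty string attribute values, so blanks are skipped:

```python
import json

def build_message_attributes(source_bucket, key):
    """Derive SNS message attributes from the S3 bucket and object key."""
    file_name = key.rsplit('/', 1)[-1]
    file_prefix = key.rsplit('/', 1)[0] if '/' in key else ''
    file_type = file_name.rsplit('.', 1)[-1] if '.' in file_name else ''
    raw = {
        'source_bucket': source_bucket,
        'file_name': file_name,
        'file_prefix': file_prefix,
        'file_type': file_type,
    }
    # SNS rejects empty attribute values, so drop any blank entries
    return {k: {'DataType': 'String', 'StringValue': v}
            for k, v in raw.items() if v}

def publish_event(sns_client, topic_arn, event, attributes):
    # Forward the original S3 event as the message body, with the
    # derived attributes available for subscription filter policies
    sns_client.publish(
        TopicArn=topic_arn,
        Message=json.dumps(event),
        MessageAttributes=attributes,
    )
```

Inside the handler loop you would call `build_message_attributes(source_bucket, key)` and pass the result to `publish_event` along with the topic ARN (read, for example, from an environment variable).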
3. Configure an S3 event notification at the root of the bucket to send event notifications to the Lambda function created in step 2.
4. Update the SNS topic access policy to allow the Lambda function from step 2 to publish.
5. Add subscriptions to the SNS topic, each with an SNS subscription filter policy. Reference the link below for an example subscription filter policy.
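One way to grant publish access in step 4 is a topic access policy statement naming the Lambda function's execution role as the principal. A sketch, assuming the role and topic ARNs shown (both are placeholders):

```python
# Hypothetical SNS topic access policy: allow the Lambda function's
# execution role to publish to the topic (ARNs are placeholders).
topic_policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Sid': 'AllowLambdaPublish',
        'Effect': 'Allow',
        'Principal': {'AWS': 'arn:aws:iam::123456789012:role/s3-event-router-role'},
        'Action': 'SNS:Publish',
        'Resource': 'arn:aws:sns:us-east-1:123456789012:s3-events',
    }],
}
```

Alternatively, you can attach an IAM policy with `sns:Publish` on the topic to the function's execution role instead of editing the topic policy.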
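For step 5, a filter policy simply maps the attribute names the publishing Lambda attaches to the values a subscriber cares about. A sketch for a hypothetical subscriber that only wants CSV files under a `reports` prefix (topic and function ARNs are placeholders):

```python
import json

# Hypothetical filter policy: deliver only events whose 'file_type'
# attribute is 'csv' and whose 'file_prefix' starts with 'reports'.
filter_policy = {
    'file_type': ['csv'],
    'file_prefix': [{'prefix': 'reports'}],
}

# To attach it when subscribing (requires boto3 and AWS credentials):
# boto3.client('sns').subscribe(
#     TopicArn='arn:aws:sns:us-east-1:123456789012:s3-events',
#     Protocol='lambda',
#     Endpoint='arn:aws:lambda:us-east-1:123456789012:function:CsvConsumer',
#     Attributes={'FilterPolicy': json.dumps(filter_policy)},
# )
```

A subscriber that needs every event simply subscribes without a filter policy, as noted earlier.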
Amazon SNS subscription filter policies
A subscription filter policy allows you to specify attribute names and assign a list of values to each attribute name…
I hope you enjoyed this brief tour of my S3 notification configuration pattern. If you have any suggestions, or if I've missed something, please comment below.