Back to Basics: Building Fan-Out Serverless Architectures Using SNS, SQS and Lambda - YouTube
Channel: Amazon Web Services
Welcome to another episode of 'Back to Basics'. I am Sid, and today we will talk about using the fan-out pattern in distributed microservice architectures. Let's start by talking about what the term 'fan-out' means.
Fan-out is a messaging pattern where a piece of information, or a message, is distributed, or 'fanned out', to multiple destinations in parallel. The idea is that each of these destinations can process the message in parallel. One way to implement this messaging pattern is to use the publisher/subscriber, or pub/sub, model.
In the pub/sub model, we define a topic, which is a logical access point enabling message communication. A publisher simply sends a message to the topic. The message is then immediately fanned out, or pushed out, to all the subscribers of that topic. This message communication is completely decoupled and asynchronous. Each component can operate and scale individually, without any strict dependencies on other components. The publisher doesn't need to know who is using the information it is broadcasting, and the subscribers don't need to know where the message comes from.
If you were to model this using stateful point-to-point communication, every publisher would have to establish and track connections with every subscriber. Additionally, failure handling and retry logic would have to be handled by you, in your code. Now, imagine doing this at scale, with tens or hundreds of microservices. It would make your application extremely coupled and convoluted, and that is exactly what you don't want when building modern applications.
Now, the best way to build pub/sub fan-out messaging on AWS is to use Amazon Simple Notification Service (Amazon SNS). Amazon SNS is a fully managed, reliable, and secure pub/sub messaging service. You can send any number of messages to SNS, at any time. Additionally, failure handling and retry logic are built into the service. So, all you need to do in your code is send the message to SNS, and SNS takes care of all the complexity involved in delivering that message, at scale, to all your subscribers.
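Publishing to a topic is a single API call. Here is a minimal sketch using the boto3 SDK; the topic ARN and the order fields are hypothetical placeholders, not values from this episode.

```python
import json

# Hypothetical topic ARN -- substitute your own.
ORDER_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:order-events"

def build_order_message(order_id, order_type):
    """Build the payload and attributes for one order event."""
    return {
        "Message": json.dumps({"orderId": order_id, "orderType": order_type}),
        # Attributes ride alongside the body, so subscribers (and SNS
        # filter policies) can inspect them without parsing the message.
        "MessageAttributes": {
            "orderType": {"DataType": "String", "StringValue": order_type}
        },
    }

def publish_order(order_id, order_type):
    """Publish one order event; SNS handles fan-out, retries, and scale."""
    import boto3  # needs AWS credentials at run time
    sns = boto3.client("sns")
    sns.publish(TopicArn=ORDER_TOPIC_ARN,
                **build_order_message(order_id, order_type))
```

Notice the publisher's code knows nothing about the subscribers, which is exactly the decoupling described above.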
Now, let's look at an end-to-end architecture that leverages this pub/sub fan-out pattern. Our setup consists of a distributed application that handles order processing. First, a customer, using a web or mobile application, places a successful order. The web app sends this request to an API Gateway endpoint in AWS. This endpoint is the “front door” of our application. API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API requests, including traffic management, authorization, access control, throttling, and monitoring.
API Gateway sends the order request to the first microservice, the acknowledgment microservice, which leverages AWS Lambda for compute. This microservice does three things. First, it verifies the request, generates a confirmation ID, and stores it in a durable database like DynamoDB. Second, once the order details have been securely and durably stored, the Lambda function sends a confirmation message back to API Gateway. Finally, it creates a message that will be sent downstream to different microservices.
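As a rough sketch, the acknowledgment function might look like the following. All names here (the `Orders` table, the topic ARN, the event shape) are illustrative assumptions, not details given in the episode.

```python
import json
import uuid

ORDERS_TABLE = "Orders"  # hypothetical DynamoDB table name
ORDER_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:order-events"  # hypothetical

def make_confirmation(order):
    """Step 1: verify the request and attach a confirmation ID."""
    if not order.get("items"):
        raise ValueError("order has no items")
    return {**order, "confirmationId": str(uuid.uuid4())}

def handler(event, context):
    """Acknowledgment microservice: store, confirm, then fan out."""
    import boto3  # needs AWS credentials at run time
    order = make_confirmation(json.loads(event["body"]))
    # Step 1 (continued): durably store the order before confirming.
    boto3.resource("dynamodb").Table(ORDERS_TABLE).put_item(Item=order)
    # Step 3: publish downstream; SNS fans it out to the other services.
    boto3.client("sns").publish(TopicArn=ORDER_TOPIC_ARN,
                                Message=json.dumps(order))
    # Step 2: the returned body flows back through API Gateway to the caller.
    return {"statusCode": 200,
            "body": json.dumps({"confirmationId": order["confirmationId"]})}
```

The ordering matters: the confirmation is only returned after the write to DynamoDB succeeds, so the customer is never confirmed for an order that wasn't durably stored.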
We have the notification microservice, to send email and SMS notifications to customers; the shipment processing microservice, to initiate the shipping workflows; and the data lake ingest microservice, to push the order details into a data lake for analytics and machine learning. One thing I've seen people try to do is to use SQS queues to send messages between microservices. That's a totally valid pattern, and it allows for efficient batch processing, where the microservices can consume messages from the queues in batches, at a pace that works for them.
But in a fan-out pattern, the same message has to be consumed by multiple microservices simultaneously, and that is not possible with a single queue, where each message is processed by just one consumer. So, what you can do is build a hybrid design, leveraging both SNS and SQS. You can put an individual SQS queue in front of each microservice, and use SNS to fan out messages to these queues.
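Wiring one queue into the topic takes two calls: a queue policy that lets the topic deliver, and the subscription itself. This sketch assumes hypothetical ARNs, and enables raw message delivery so consumers receive the original payload rather than an SNS envelope.

```python
import json

def queue_policy(queue_arn, topic_arn):
    """Access policy allowing one SNS topic to deliver into one SQS queue."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            # Only this topic may send; anything else is rejected.
            "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
        }],
    })

def subscribe_queue(topic_arn, queue_arn, queue_url):
    """Attach the policy, then subscribe the queue to the topic."""
    import boto3  # needs AWS credentials at run time
    boto3.client("sqs").set_queue_attributes(
        QueueUrl=queue_url,
        Attributes={"Policy": queue_policy(queue_arn, topic_arn)})
    boto3.client("sns").subscribe(
        TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn,
        Attributes={"RawMessageDelivery": "true"})
```

Repeating `subscribe_queue` once per microservice queue gives every subscriber its own copy of each published message.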
Once the data reaches the queues, the AWS Lambda service automatically polls the queues, extracts the messages in batches, and invokes your Lambda functions to process them.
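A consuming function then receives a batch of SQS records in a single invocation. The sketch below also reports partial batch failures, so only failed messages are retried; `process` is a hypothetical stand-in for the real per-order work.

```python
import json

def process(order):
    """Hypothetical per-order work done by one microservice."""
    if "orderId" not in order:
        raise ValueError("malformed order")

def handler(event, context):
    """SQS-triggered Lambda: handle each record, report partial failures."""
    failures = []
    for record in event["Records"]:
        try:
            process(json.loads(record["body"]))
        except Exception:
            # With ReportBatchItemFailures enabled on the event source
            # mapping, only these messages return to the queue for retry.
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```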
Now, in a distributed system, not every message needs to be sent to every microservice. Often, there are scenarios where you want to conditionally forward a message, based on an attribute in the message. You can achieve this using the SNS filter policy feature. Let's say you had a different pipeline for processing digital orders. Using an SNS filter policy, you can route the right order type to the right processing pipeline.
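A filter policy is just a JSON document attached to a subscription, which SNS compares against each message's attributes. The policy below is illustrative, and `matches` is a simplified local model of SNS's exact-string matching, not the service's full matching logic.

```python
import json

DIGITAL_FILTER = {"orderType": ["digital"]}  # hypothetical filter policy

def matches(policy, attributes):
    """Simplified model: every policy key must match one allowed value."""
    return all(attributes.get(key) in allowed
               for key, allowed in policy.items())

def attach_filter(subscription_arn):
    """Attach the filter policy to an existing SNS subscription."""
    import boto3  # needs AWS credentials at run time
    boto3.client("sns").set_subscription_attributes(
        SubscriptionArn=subscription_arn,
        AttributeName="FilterPolicy",
        AttributeValue=json.dumps(DIGITAL_FILTER))
```

With this policy attached, the digital-order queue receives only messages whose `orderType` attribute is `digital`; everything else never reaches that subscriber.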
Offloading this message filtering capability to SNS lets you keep your application code simple, while letting Amazon SNS do all the heavy lifting.
In this episode, we explored how the fan-out pattern with Amazon SNS can enable asynchronous message communication when building distributed microservice architectures. Check out the links below for more details. See you next time.