AWS Serverless Demonstration
What does Amazon Web Services' AI deep learning software think is in an image?
This demonstration uses an AWS service called Rekognition to work out what is in an image. Upload an image and then give the service a moment to process it. Click any image below to see what objects and text Amazon thinks it contains.
How does this work?
This serverless demonstration uses two HTML pages hosted on Amazon S3, two Lambda functions, one SQS queue, and a DynamoDB table.
The HTML page you are on uses JavaScript to upload a file directly to an AWS S3 bucket. No intermediate server is involved in the upload.
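The page itself makes this call from the browser with the AWS SDK for JavaScript and its temporary Cognito credentials. Sketched here in Python with boto3, to match the Lambda code described below, the equivalent upload is a single put_object call; the bucket name and object key are placeholders, not the demo's real values.

```python
import boto3

# Sketch only: the real page performs this upload from the browser with the
# AWS SDK for JavaScript and temporary Cognito credentials.
s3 = boto3.client("s3")

with open("photo.jpg", "rb") as f:
    s3.put_object(
        Bucket="example-upload-bucket",   # assumed bucket name
        Key="uploads/photo.jpg",          # assumed object key
        Body=f,
        ContentType="image/jpeg",
    )
```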
A Lambda function written in Python monitors the bucket for changes. When it sees that an image has been added, it writes the details of that image into an Amazon DynamoDB table and sends a message containing the image's S3 object key to an SQS queue.
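The actual handler is not shown on this page, but a minimal sketch of that first function, assuming placeholder table, queue, and attribute names, could look like this:

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
sqs = boto3.client("sqs")

TABLE_NAME = "example-images"  # assumed table name
QUEUE_URL = "https://sqs.ap-southeast-2.amazonaws.com/123456789012/example-queue"  # placeholder


def handler(event, context):
    """Triggered by S3 ObjectCreated events on the upload bucket."""
    table = dynamodb.Table(TABLE_NAME)
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Record the new image in DynamoDB so the results page can find it later.
        table.put_item(Item={"imageKey": key, "bucket": bucket, "status": "PENDING"})

        # Queue the image for the Rekognition worker.
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"bucket": bucket, "key": key}),
        )
```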
Another Lambda function monitors the SQS queue. When it sees a new message, it uses Amazon Rekognition to process the image identified by that S3 object key and writes the results into the DynamoDB record created for that image in the previous step.
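A matching sketch of the second function, reusing the same assumed table and attribute names, might look like this:

```python
import json
import boto3

rekognition = boto3.client("rekognition")
dynamodb = boto3.resource("dynamodb")

TABLE_NAME = "example-images"  # assumed table name, matching the sketch above


def handler(event, context):
    """Triggered by messages arriving on the SQS queue."""
    table = dynamodb.Table(TABLE_NAME)
    for record in event["Records"]:
        body = json.loads(record["body"])
        image = {"S3Object": {"Bucket": body["bucket"], "Name": body["key"]}}

        # Ask Rekognition what objects and text appear in the image.
        labels = rekognition.detect_labels(Image=image, MaxLabels=10)
        text = rekognition.detect_text(Image=image)

        # Write the results back onto the record created by the first Lambda.
        table.update_item(
            Key={"imageKey": body["key"]},
            UpdateExpression="SET #s = :s, labels = :l, detectedText = :t",
            ExpressionAttributeNames={"#s": "status"},
            ExpressionAttributeValues={
                ":s": "DONE",
                ":l": [l["Name"] for l in labels["Labels"]],
                ":t": [t["DetectedText"] for t in text["TextDetections"]],
            },
        )
```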
A second HTML page uses JavaScript to fetch the DynamoDB record for an image when you click on it above.
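That page does the read with the AWS SDK for JavaScript; the equivalent lookup, kept in Python with boto3 for consistency with the sketches above (same assumed table and key names), is a single get_item call:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-images")  # assumed table name


def get_image_results(image_key: str) -> dict:
    """Fetch the labels and text Rekognition found for one uploaded image."""
    response = table.get_item(Key={"imageKey": image_key})
    return response.get("Item", {})


# e.g. get_image_results("uploads/photo.jpg")
```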
Cognito roles control what access the JavaScript calls have, and IAM policies control the internal access between the Lambda functions, SQS, and the other services.
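As an illustration only, the policy attached to the Cognito role used by the browser might be scoped roughly like the following (written here as a Python dict; the actions, ARNs, and names are assumptions, not the demo's real resources):

```python
# Hypothetical policy for the Cognito identity pool's browser-facing role:
# it can only upload objects to one prefix and read result records.
BROWSER_ROLE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::example-upload-bucket/uploads/*",
        },
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem"],
            "Resource": "arn:aws:dynamodb:ap-southeast-2:123456789012:table/example-images",
        },
    ],
}
```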
If you are viewing this page at https://serverless.gmacinternet.com.au then it is being served by a Cloudflare Worker running on their edge servers. The page does not actually exist as a file on an origin server.
Coded by Gerard McDermott, GMAC Internet Solutions.