Create a Markdown Blog with Astrojs, Turso, and AWS

Build a statically generated markdown blog with thumbnails. Technologies used: S3, Lambda, CloudFront, AWS Cloud Development Kit (CDK), Astro.js, Svelte.

December 6, 2024

App Architecture

Here are the main features of the app:

  • Admin dashboard
    • Post new blogs
    • Edit blog posts
    • Markdown Editor
    • Publish blogs to public view
  • Upload blog image
  • Admin Authentication
  • Static Generation

Our architecture requires three key components: an object storage service, a persistent database, and serverless compute functions.

For the database I decided to use Turso, a cloud database built on libSQL, a fork of SQLite. We could use the AWS Relational Database Service (RDS), but it would be expensive for a personal site. Alternatively, you could use DynamoDB, a NoSQL database on AWS, which should cost next to nothing for a simple blog. For file storage we will use S3 alongside CloudFront (a CDN) to serve our blog thumbnail images. To optimize and resize the images we will use a Lambda function with an S3 event trigger.

To manage and deploy our infrastructure we will use the AWS Cloud Development Kit (CDK).

The Astro.js app will be hosted on Vercel. Any other platform like Netlify or Cloudflare will work; however, if you are using a VPS you will have to set up a webhook to rebuild your app. With these cloud providers and services our hosting cost will be dirt cheap, especially if you're able to use Turso's and Vercel's free tiers.

To start, create an Astro.js app with npm create astro@latest, then import your project on Vercel.
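
Since the admin dashboard will be rendered on the server while the public blog pages are prerendered, the project also needs an SSR adapter. Here is a minimal astro.config.ts sketch, assuming the @astrojs/vercel and @astrojs/svelte integrations (the exact adapter entry point depends on your Astro and adapter versions):

// astro.config.ts -- a sketch; adjust for your Astro and adapter versions
import { defineConfig } from "astro/config";
import vercel from "@astrojs/vercel/serverless";
import svelte from "@astrojs/svelte";

export default defineConfig({
  // Render on the server so middleware and the /admin routes work;
  // public pages can still opt in to prerendering.
  output: "server",
  adapter: vercel(),
  integrations: [svelte()],
});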

Setup Turso and Drizzle

Let's set up our database inside our Astro.js app. First create a database in Turso, grab the auth token and database URL, and add them to a .env file. In addition to drizzle-orm we will also use Drizzle Kit, a CLI tool for managing SQL database migrations.

TURSO_CONNECTION_URL=
TURSO_AUTH_TOKEN=

Next, install drizzle-orm, drizzle-kit, and @libsql/client.
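
With pnpm (swap in npm or yarn if that's what you use):

pnpm add drizzle-orm @libsql/client

pnpm add -D drizzle-kit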

Create a database instance

src/libs/db/index.ts

import { drizzle } from "drizzle-orm/libsql";

export const db = drizzle({
  connection: {
    url: import.meta.env.TURSO_CONNECTION_URL as string,
    authToken: import.meta.env.TURSO_AUTH_TOKEN as string,
  },
});

Now let's create a Drizzle Kit config.

drizzle.config.ts

import { config } from "dotenv";
import { defineConfig } from "drizzle-kit";

config({ path: ".env" });

export default defineConfig({
  schema: "./src/libs/models/index.ts",
  out: "./migrations",
  dialect: "turso",
  dbCredentials: {
    url: process.env.TURSO_CONNECTION_URL!,
    authToken: process.env.TURSO_AUTH_TOKEN!,
  },
});

Now let's create a table for our blogs.

src/libs/db/blog/table.ts

import {
  integer,
  sqliteTable,
  text,
  uniqueIndex,
} from "drizzle-orm/sqlite-core";

export const blogTable = sqliteTable(
  "blogs",
  {
    id: integer("id").primaryKey({ autoIncrement: true }),
    title: text("title").notNull(),
    description: text("description").notNull(),
    blogContent: text("blogContent").notNull(),
    slug: text("slug").unique().notNull(),
    imageKey: text("imageKey").notNull(),
    published: integer({ mode: "boolean" }).default(false),
    createdAt: text("created_at").notNull(),
    updatedAt: text("updated_at").notNull(),
  },
  (table) => {
    return {
      slugIndex: uniqueIndex("slug_idx").on(table.slug),
    };
  },
);

export type InsertBlog = typeof blogTable.$inferInsert;
export type SelectBlog = typeof blogTable.$inferSelect;

This file will be used by Drizzle Kit to create our migration files.

We'll add an index on slug, because the slug column will be used as a unique identifier.

  (table) => {
    return {
      slugIndex: uniqueIndex("slug_idx").on(table.slug),
    };

These types are inferred from our table and can be used when selecting from and inserting into it.

export type InsertBlog = typeof blogTable.$inferInsert;

export type SelectBlog = typeof blogTable.$inferSelect;

Now let's create the migration files and push our changes to the database.

pnpm drizzle-kit generate

pnpm drizzle-kit migrate

pnpm drizzle-kit push
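
Later on, the CRUD handlers will import createBlog, getBlogFromId, and updateBlog from src/libs/db/blog/queries.ts, which isn't shown in this post. Here is a minimal sketch of those helpers with Drizzle; the exact signatures are assumptions based on how they're called later:

// src/libs/db/blog/queries.ts -- a sketch, not shown in the original post
import { eq } from "drizzle-orm";
import { db } from "../index";
import { blogTable, type InsertBlog } from "./table";

// Insert a new blog row; the libSQL driver's result exposes `rows`,
// which postBlog checks later on
export async function createBlog(blog: InsertBlog) {
  return db.insert(blogTable).values(blog);
}

// Look up a single blog by its numeric id
export async function getBlogFromId({ id }: { id: number }) {
  const [blog] = await db.select().from(blogTable).where(eq(blogTable.id, id));
  return blog;
}

// Patch an existing blog row
export async function updateBlog(id: number, data: Partial<InsertBlog>) {
  return db.update(blogTable).set(data).where(eq(blogTable.id, id));
}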

Setup Authentication

We'll be using GitHub OAuth with Auth.js, which has an Astro adapter. Use this guide to integrate Auth.js. You can use any OAuth provider; I decided to use GitHub.
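
The guide covers the full setup; for reference, an auth-astro config with the GitHub provider looks roughly like this (the client ID and secret variable names are assumptions, use whatever you put in your .env):

// auth.config.ts -- a rough sketch; follow the Auth.js / auth-astro guide for the full setup
import GitHub from "@auth/core/providers/github";
import { defineConfig } from "auth-astro";

export default defineConfig({
  providers: [
    GitHub({
      clientId: import.meta.env.GITHUB_CLIENT_ID,
      clientSecret: import.meta.env.GITHUB_CLIENT_SECRET,
    }),
  ],
});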

After that's done, we'll create middleware to protect our /admin routes.

src/middleware.ts

import type { MiddlewareHandler } from "astro";
import { getSession } from "auth-astro/server";
export const onRequest: MiddlewareHandler = async (
  context,
  next,
) => {
  const paths = context.url.pathname.split("/");
  const rootPath = paths[1];

  if (rootPath.toLowerCase() === "admin") {
    const session = await getSession(context.request);
    // check if session exists
    const email = session?.user?.email;
    if (email) {
      if (email === import.meta.env.EMAIL) {
        return next();
      } else {
        // Imposter is trying to login
        // Redirect to homepage
        return new Response(null, {
          status: 303,
          headers: { Location: "/" },
        });
      }
    }
    // Session not found
    return new Response(null, {
      status: 303,
      headers: { Location: "/login" },
    });
  }
  if (rootPath.toLowerCase() === "login") {
    const session = await getSession(context.request);
    if (session?.user?.email === import.meta.env.EMAIL) {
      return new Response(null, {
        status: 303,
        headers: { Location: "/admin" },
      });
    }
    return next();
  }
  return next();
};

Any request to /admin/* will be redirected to /login if the user isn't logged in.

The OAuth provider gives us the user's email in the session. To ensure the logged-in user is the admin, add an email environment variable

EMAIL={your_github_email}

which will be checked here: session?.user?.email === import.meta.env.EMAIL

Vercel Webhook

Since most of our blog pages will be statically generated, meaning the HTML is built once and cached on CDNs, we'll need to rebuild the app whenever content changes. To create a project webhook, go to your project page on Vercel, then go to Settings > Git > Deploy Hooks, create a webhook with any name, and choose the main branch. Copy the webhook URL and set an environment variable.

.env

VERCEL_DEPLOY_HOOK=

Afterwards, create a utility function that triggers a redeploy with our webhook.

src/libs/utils.ts

export async function deployVercel() {
  try {
    const resp = await fetch(
      import.meta.env.VERCEL_DEPLOY_HOOK,
      {
        method: "POST",
      },
    ).then((r) => r.json());
    return resp;
  } catch (e) {
    console.log(e);
    throw new Error("Failed to fetch deploy webhook");
  }
}

Setup AWS IAM Users and Policies

Log in to your AWS console and go to IAM. Click Policies in the left-side navigation.

Then click Create policy and give it any name that makes sense. Next you will see the policy editor; click the JSON tab.

Add this JSON to the editor, click Next, and then create the policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole"
            ],
            "Resource": [
                "arn:aws:iam::*:role/cdk-*"
            ]
        }
    ]
} 

This policy will be used for our CDK deployment user.

Create User

Now let's create a user for our CDK deployments. Navigate to Access management > Users, then click Create user. Enter a username, then click Next. For permissions options choose 'Attach policies directly', search for the policy we just created, and check the box. Click Next and create the user.

The second user will be for our S3 operations. It needs to upload images to the upload bucket and delete images from the images bucket, so create another policy with this JSON:

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "DeleteObject",
			"Effect": "Allow",
			"Action": "s3:DeleteObject",
			"Resource": "arn:aws:s3:::{your-images-bucket}/*"
		},
		{
			"Sid": "PutObject",
			"Effect": "Allow",
			"Action": "s3:PutObject",
			"Resource": "arn:aws:s3:::{your-upload-bucket}/*"
		}
	]
}

Replace {your-upload-bucket} and {your-images-bucket} with your bucket names. This will allow the user to upload any file to one bucket and delete any file from the other. S3 bucket names are globally unique identifiers, so make sure yours aren't taken. Now create another user, the same way you created the CDK deployment user, and attach the new S3 policy we just created.

Infrastructure as Code

Setup CDK

Follow this guide to setup the CDK.

Run this command to generate a project: cdk init app --language typescript

You should have files bin/{your_project_name}.ts and lib/{your_project_name}-stack.ts.
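
The generated bin file simply instantiates the stack; after renaming the stack class to BlogIacStack (defined below), it looks roughly like this:

#!/usr/bin/env node
// bin/{your_project_name}.ts -- wires our stack into the CDK app
import * as cdk from "aws-cdk-lib";
import { BlogIacStack } from "../lib/portfolio-iac-stack";

const app = new cdk.App();
new BlogIacStack(app, "BlogIacStack");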

In .env file:

DEST_BUCKET_NAME={dest-bucket}
UPLOAD_BUCKET_NAME={upload-bucket}

Ensure DEST_BUCKET_NAME and UPLOAD_BUCKET_NAME match the bucket names we set in our S3 policy.

Let's create a file called config.ts in /lib.

lib/config.ts

import * as path from "path";
import * as dotenv from "dotenv";
dotenv.config({ path: path.resolve(__dirname, "../.env") });

export type ConfigProps = {
  DEST_BUCKET_NAME: string;
  UPLOAD_BUCKET_NAME: string;
};

export const getConfig = (): ConfigProps => ({
  DEST_BUCKET_NAME: process.env.DEST_BUCKET_NAME as string,
  UPLOAD_BUCKET_NAME: process.env.UPLOAD_BUCKET_NAME as string,
});

This will retrieve our environment variables from .env

Now let's create our stack.

lib/portfolio-iac-stack.ts

import * as cdk from "aws-cdk-lib";
import { Construct } from "constructs";
import { Distribution } from "aws-cdk-lib/aws-cloudfront";
import * as s3 from "aws-cdk-lib/aws-s3";
import { NodejsFunction } from "aws-cdk-lib/aws-lambda-nodejs";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as eventSources from "aws-cdk-lib/aws-lambda-event-sources";
import { S3BucketOrigin } from "aws-cdk-lib/aws-cloudfront-origins";
import { getConfig } from "./config";

export class BlogIacStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    const config = getConfig();
    //Bucket where optimized images are stored
    const destinationBucket = new s3.Bucket(this, "DestinationBucket", {
      bucketName: config.DEST_BUCKET_NAME,
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      accessControl: s3.BucketAccessControl.PRIVATE,
      enforceSSL: true,
      autoDeleteObjects: true,
      removalPolicy: cdk.RemovalPolicy.DESTROY,
    });
    //Bucket where original images are uploaded to
    const uploadBucket = new s3.Bucket(this, "uploadBucket", {
      bucketName: config.UPLOAD_BUCKET_NAME,
      autoDeleteObjects: true,
      removalPolicy: cdk.RemovalPolicy.DESTROY,
    });

    const resizeImageLambda = new NodejsFunction(this, "ResizeImages", {
      handler: "handler",
      entry: "./src/index.ts",
      runtime: lambda.Runtime.NODEJS_18_X,
      architecture: lambda.Architecture.X86_64,
      timeout: cdk.Duration.seconds(12),
      // Sharp is os and architecture dependent
      // Ensure correct version is installed
      bundling: {
        nodeModules: ["sharp"],
        forceDockerBundling: true,
      },
      environment: {
        DEST_BUCKET: config.DEST_BUCKET_NAME,
      },
    });
    uploadBucket.grantRead(resizeImageLambda);
    uploadBucket.grantDelete(resizeImageLambda);
    destinationBucket.grantWrite(resizeImageLambda);

    new Distribution(this, "BlogImageCache", {
      defaultBehavior: {
        origin: S3BucketOrigin.withOriginAccessControl(destinationBucket),
      },
    });

    const s3PutEventSource = new eventSources.S3EventSource(uploadBucket, {
      events: [s3.EventType.OBJECT_CREATED],
    });

    resizeImageLambda.addEventSource(s3PutEventSource);
  }
}

destinationBucket is where the resized and optimized images will live. uploadBucket is the bucket our admin will upload blog images to, where they can be picked up and optimized by resizeImageLambda.

This sets up a CDN for all files in destinationBucket.

    new Distribution(this, "BlogImageCache", {
      defaultBehavior: {
        origin: S3BucketOrigin.withOriginAccessControl(destinationBucket),
      },
    });

This creates an S3 object-created event source and subscribes our resizeImageLambda to it.

    const s3PutEventSource = new eventSources.S3EventSource(uploadBucket, {
      events: [s3.EventType.OBJECT_CREATED],
    });

    resizeImageLambda.addEventSource(s3PutEventSource);

This bundling option inside our resizeImageLambda ensures that our sharp dependency, an image processing library, is installed for the correct operating system and architecture, since sharp ships platform-specific binaries.

      bundling: {
        nodeModules: ["sharp"],
        forceDockerBundling: true,
      },

Our Lambda's entry param points to ./src/index.ts; this is where our Lambda handler code will reside.

In src/index.ts

import type { S3Event, Context, Callback } from "aws-lambda";
import {
  S3Client,
  GetObjectCommand,
  PutObjectCommand,
  DeleteObjectCommand,
} from "@aws-sdk/client-s3";
import sharp from "sharp";

const s3Client = new S3Client({});

export const handler = async (
  event: S3Event,
  _: Context,
  callback: Callback,
): Promise<void> => {
  console.log("S3 Event received:", JSON.stringify(event));
  const destBucket = process.env.DEST_BUCKET;
  try {
    // Extract bucket name and object key from the event
    const record = event.Records[0];
    const bucketName = record.s3.bucket.name;
    // Grab file name
    const objectKey = decodeURIComponent(
      record.s3.object.key.replace(/\+/g, " "),
    );

    console.log(`Bucket: ${bucketName}, Key: ${objectKey}`);

    // Get the object from the bucket
    // Need to grab the file's body
    const getObjectCommand = new GetObjectCommand({
      Bucket: bucketName,
      Key: objectKey,
    });
    const objectData = await s3Client.send(getObjectCommand);

    try {
      // Convert to ByteArray so sharp can process image. 
      const image = await objectData.Body?.transformToByteArray();
      // Optimize and resize images
      const outputBuffer150 = await sharp(image)
        .resize(150)
        .webp({ quality: 90 })
        .toBuffer();
      const outputBuffer800 = await sharp(image).resize(800).webp().toBuffer();
      // grab the image id 
      const keyId = objectKey.split(".")[0];
       // store new images in the destination bucket
      await s3Client.send(
        new PutObjectCommand({
          Bucket: destBucket,
          Key: `${keyId}_150x.webp`,
          Body: outputBuffer150,
          ContentType: objectData.ContentType,
        }),
      );
 
      await s3Client.send(
        new PutObjectCommand({
          Bucket: destBucket,
          Key: `${keyId}_800x.webp`,
          Body: outputBuffer800,
          ContentType: objectData.ContentType,
        }),
      );
      // Delete image from upload bucket after optimizing
      await s3Client.send(
        new DeleteObjectCommand({
          Bucket: event.Records[0].s3.bucket.name,
          Key: objectKey,
        }),
      );
    } catch (error) {
      // Surface resize/upload failures instead of reporting success
      console.log(error);
      callback(error as unknown as string);
      return;
    }
    console.log(`Object retrieved: ${objectData.ContentLength} bytes`);
    callback(null, `Successfully processed object: ${objectKey}`);
  } catch (error) {
    console.error("Error processing S3 event:", error);
    callback(error as unknown as string);
  }
};
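
With the stack and Lambda handler written, configure AWS CLI credentials for the CDK deployment user created earlier and deploy the infrastructure (bootstrapping is only needed once per account and region). Note that forceDockerBundling: true means Docker must be running locally during the deploy.

cdk bootstrap

cdk deploy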

S3 Utils

Let's set up our S3 upload and delete functions. First we need to create an access key for our S3 user. Go to Users, then to the S3 user, then Security credentials > Access keys > Create access key.

Now select Application running outside AWS, then click next.

Add any description and click Create access key.

Afterwards, a modal will show you both keys. Copy the access key and secret key and paste them into your .env file. Also add the S3 bucket names from earlier.

.env

AWS_SECRET_ACCESS_KEY=
AWS_ACCESS_KEY_ID=
AWS_REGION=
S3_BUCKET_NAME=
S3_BUCKET_DEST=

In src/libs/s3/index.ts

import {
  PutObjectCommand,
  DeleteObjectsCommand,
  S3Client,
  type DeleteObjectsCommandInput,
} from "@aws-sdk/client-s3";
import sharp from "sharp";
import { randomUUID } from "crypto";
const client = new S3Client({ region: import.meta.env.AWS_REGION });
export async function deleteBlogFiles({
  key,
}: {
  key: string;
}) {
  const [keyId] = key.split(".");

  const delTwo: DeleteObjectsCommandInput = {
    Bucket: import.meta.env.S3_BUCKET_DEST,
    Delete: {
      Objects: [
        { Key: `${keyId}_800x.webp` },
        { Key: `${keyId}_150x.webp` },
      ],
    },
  };
  const command = new DeleteObjectsCommand(delTwo);
  await client.send(command);
}
export async function uploadImageFile({
  file,
}: {
  file: File;
}) {
  const body = Buffer.from(await file.arrayBuffer());
  try {
    const sharpImage = sharp(body);
    const metaData = await sharpImage.metadata();
    const width = metaData.width;
    const height = metaData.height;
    if (!width || !height) {
      throw Error("Wrong aspect ratio.");
    }
    // Round the aspect ratio -- image can be a few pixels off
    // * Floats are very inaccurate
    if (Math.floor((width / height) * 10) / 10 !== 1.5) {
      throw Error("Wrong aspect ratio.");
    }
    if (width < 800) {
      throw Error("Image too small.");
    }
    if (!checkIsImage(file.type)) {
      console.log("not image");
      return undefined;
    }
    const id = randomUUID();
    let key = `${id}.${file.type.split("/")[1]}`;
    const command = new PutObjectCommand({
      Bucket: import.meta.env.S3_BUCKET_NAME,
      Key: key,
      Body: body,
      ContentType: `${file.type}`,
    });
    await client.send(command);
    return { key };
  } catch (e) {
    console.log(e);
  }
}
// Ensure file has correct file extension
function checkIsImage(type: string) {
  console.log(type.split("/"));
  switch (type.split("/")[1].toLowerCase()) {
    case "jpeg":
      return true;
    case "jpg":
      return true;
    case "png":
      return true;
    case "webp":
      return true;
    default:
      return false;
  }
}

While we could theoretically handle image optimization at this point, it's more efficient to use an event-based approach instead. Image optimization can be time-consuming, and there’s no need to make the user wait unnecessarily. However, one potential drawback of this approach is that the database may be updated before the images are fully processed and available.

Since our blog is statically generated, updates to the blog won’t go live until a redeploy occurs, which typically takes about a minute (or less with more compute power). This delay provides ample time for our Lambda function to optimize images and upload them to the images bucket. However, if redeploys were faster, we could encounter missing images, leading to a poor user experience.

One alternative would be to trigger our webhook directly from within the Lambda function. However, this approach would increase complexity because there would now be two sources triggering redeploy hooks: the Lambda function and the Astro app, which would need to handle updates without associated images.
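
One more note on naming: since the Lambda writes ${keyId}_150x.webp and ${keyId}_800x.webp to the destination bucket, the front end can derive both image URLs from the stored imageKey. A hypothetical helper (the CDN_DOMAIN variable is an assumption and isn't set up anywhere in this post) might look like:

// Hypothetical helper -- CDN_DOMAIN (your CloudFront domain) is an assumed env variable
export function blogImageUrl(imageKey: string, size: 150 | 800) {
  const keyId = imageKey.split(".")[0];
  return `https://${import.meta.env.CDN_DOMAIN}/${keyId}_${size}x.webp`;
}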

CRUD

Now for the Post, Put, and Delete server logic. Let's handle our POST logic first.

src/libs/blog

import z from "zod";
import { uploadImageFile } from "../s3";
import { createBlog } from "../db/blog/queries";
import { createSlug, deployVercel } from "../utils";
const FormDataSchema = z.object({
  file: z.instanceof(File, {
    message: "Expected a File instance",
  }),
  blog: z.string(),
  title: z.string(),
  description: z.string(),
});

export async function postBlog({
  formData,
}: {
  formData: FormData;
}) {
  const data = {
    file: formData.get("file"),
    blog: formData.get("blog"),
    title: formData.get("title"),
    description: formData.get("description"),
  };
  const parsed = FormDataSchema.safeParse(data);
  if (!parsed.success) {
    return {
      success: false,
      error: "Incorrect form data.",
    };
  }
  const safeData = parsed.data;
  let imageKey = "";
  if (safeData.file) {
    let key: string | undefined;
    try {
      const resp = await uploadImageFile({
        file: safeData.file,
      });
      key = resp?.key;
    } catch (e) {
      console.error(e, "Upload Image");
      return { success: false, error: "" };
    }
    if (key) {
      imageKey = key;
    } else {
      return {
        success: false,
        error: "Failed to upload image.",
      };
    }
  } else {
    return { success: false, error: "No image provided." };
  }
  try {
    await createBlog({
      title: safeData.title,
      description: safeData.description,
      imageKey,
      blogContent: safeData.blog,
      createdAt: Date.now().toString(),
      updatedAt: Date.now().toString(),
      slug: createSlug(safeData.title),
    });
  } catch (e) {
    console.error(e);
    return {
      success: false,
      error: "Failed to create blog.",
    };
  }
  try {
    await deployVercel();
  } catch (e) {
    console.error(e, "Failed webhook.");
  }
  return { success: true };
}

This post function first uploads our image; if that succeeds, it writes to our database and then redeploys our Astro app.
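
The createSlug helper imported from ../utils isn't shown in this post; here is a minimal sketch, assuming slugs are derived from the blog title:

// src/libs/utils.ts -- a sketch of createSlug, which isn't shown in the post
export function createSlug(title: string) {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9\s-]/g, "") // strip punctuation
    .replace(/\s+/g, "-"); // collapse whitespace into hyphens
}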

/src/libs/blog/edit.ts

import {
  getBlogFromId,
  updateBlog,
} from "../db/blog/queries";
import { deleteBlogFiles, uploadImageFile } from "../s3";
import { createSlug, deployVercel } from "../utils";
import z from "zod";
const FormDataSchema = z.object({
  file: z
    .instanceof(File, {
      message: "Expected a File instance",
    })
    .optional(),
  blog: z.string(),
  title: z.string(),
  description: z.string(),
  id: z.string(),
});
export async function putBlog({
  formData,
}: {
  formData: FormData;
}): Promise<{ error?: string; success: boolean }> {
  const data = {
    file: formData.get("file") as File,
    blog: formData.get("blog") as string,
    title: formData.get("title") as string,
    description: formData.get("description") as string,
    id: formData.get("id") as string,
  };
  const blogData = FormDataSchema.safeParse(data);

  if (!blogData.success) {
    console.log(blogData.error);
    return {
      success: false,
      error: "Incorrect form data.",
    };
  }

  const { blog, title, description } = blogData.data;
  // Use same image if no file was uploaded
  // Optimize this
  if (data.file.size === 0) {
    try {
      await updateBlog(Number.parseInt(data.id), {
        blogContent: blog,
        title,
        description,
        slug: createSlug(title),
        updatedAt: Date.now().toString(),
      });
    } catch (e) {
      console.log(e);
      return {
        success: false,
        error: "Failed to update blog.",
      };
    }
    try {
      await deployVercel();
    } catch (e) {
      console.error(e);
    }
    return { success: true };
  }
  let key: string | undefined = "";
  try {
    const res = await uploadImageFile({
      file: data.file,
    });
    key = res?.key;
    const blogResp = await getBlogFromId({
      id: Number.parseInt(data.id),
    });
    await deleteBlogFiles({ key: blogResp.imageKey });
  } catch (e) {
    console.error(e);
    return {
      success: false,
      error: "Failed to delete blog files.",
    };
  }

  if (key) {
    try {
      await updateBlog(Number.parseInt(data.id), {
        blogContent: blog,
        title,
        description,
        slug: createSlug(title),
        imageKey: key,
        updatedAt: Date.now().toString(),
      });
    } catch (e) {
      console.error(e);
      return {
        success: false,
        error: "Failed to update blog.",
      };
    }
    try {
      await deployVercel();
    } catch (e) {
      console.error(e);
    }
    return { success: true };
  }

  return {
    success: false,
    error: "Failed to upload image.",
  };
}

Summary

We have set up our database, configured IAM users and policies, deployed the infrastructure with the AWS CDK, implemented the Post and Put logic for the admin dashboard, and created a Lambda function to process images. Most of the backend is now in place. In the next post we will create our pages and client-side logic.