How I built my Blog
The website you are reading this article on was built with Elixir and the Phoenix web framework. All the CSS layout is done with Tailwind CSS, and this specific layout is part of Tailwind UI.
I will not expose everything about this website here because some parts are reaaaaally boring. I love boring tech and think of myself as very skeptical about every new framework that pops up every day. You have probably heard phrases like the ones below:
- “Are you using Microservices?”
- “Today, if you care about the business of your company, you will implement the solutions using blockchain.”
- “Micro Frontend is the solution for most of the problems we have in front-end development today.”
Do you really need all of that? Maybe yes. But probably no. So, being skeptical and pragmatic about what the best tool for the job is will always put you on the boring tech side, because boring tech works.
Anyways, back to the main subject. As I said, I like boring tech, and what could be more boring than a simple monolith using a well-established web framework built on one of the best languages today regarding readability, maintainability, and happiness?
If you screamed Phoeniiiiixxxx! or Elixiiiiir!, you are right. And I will show you some exciting parts of this very boring website.
It starts like this:
```
mix phx.new blog --live
```
This command generates the project and a bunch of files. The funny part is that I wrote more HTML with Tailwind CSS utilities to create this blog than Elixir.
This project only has one table so far and one “model” in the domain. Here it is:
```elixir
defmodule Blog.Posts.Article do
  use Ecto.Schema
  import Ecto.Changeset
  import Ecto.Query, only: [from: 2]

  alias Blog.Posts.Article
  alias Blog.Repo

  schema "articles" do
    field :summary, :string
    field :body, :string
    field :html_body, :string
    field :slug, :string
    field :title, :string
    field :author, :string
    field :date, :date
    field :hash_id, :string
    field :html_url, :string

    belongs_to :category, Blog.Posts.Category

    timestamps()
  end

  @required_fields [
    :title,
    :slug,
    :body,
    :date,
    :summary,
    :category_id,
    :hash_id,
    :html_url,
    :author
  ]

  @doc false
  def changeset(article, attrs) do
    article
    |> cast(attrs, @required_fields)
    |> validate_required(@required_fields)
  end

  def new_post(params) do
    struct!(__MODULE__, params)
  end

  def get_articles() do
    Repo.all(Article)
  end

  def get_articles_order_by(order_key) do
    # Repo.all/2 does not take query options such as :order_by,
    # so build an explicit query instead
    Repo.all(from(Article, order_by: ^[asc: order_key]))
  end
end
```
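These helpers are what the pages call to list the posts. For example (the call site here is hypothetical):

```elixir
# e.g. in IEx or in a LiveView mount/1 (hypothetical call site):
articles = Blog.Posts.Article.get_articles_order_by(:date)
```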
As you can see, it is a straightforward model, and there is nothing unusual about it. The important fields I want to highlight for this article are `slug`, `title`, `body`, `summary`, `hash_id`, and `html_body`. You will see them in action in a little bit.
So, how do I store the articles? If you go here you will see a list of the articles on the website. Many people don’t want to deal with databases when creating blogs and take other approaches. But today a tiny database costs you almost nothing, and I decided I wanted one for my blog; I am using Ecto and Postgres, by the way. But how and when do I insert the articles into the database? This is the funny part. But first…
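For context, the migration behind this table would look roughly like the sketch below. This is a reconstruction from the schema above, not the exact file from the project; the one detail that matters later is the unique index on `slug`, which the upsert relies on:

```elixir
defmodule Blog.Repo.Migrations.CreateArticles do
  use Ecto.Migration

  def change do
    create table(:articles) do
      add :title, :string
      add :slug, :string
      add :summary, :text
      add :body, :text
      add :html_body, :text
      add :author, :string
      add :date, :date
      add :hash_id, :string
      add :html_url, :string
      add :category_id, references(:categories)

      timestamps()
    end

    # The upsert shown later uses conflict_target: [:slug],
    # which requires a unique index on slug.
    create unique_index(:articles, [:slug])
  end
end
```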
Use markdown and do not store your .md files in your blog
I want to write my articles using markdown because of all the benefits you get when dealing with headers, making text bold, italic, or struck through, among other things. Markdown is also easily converted to HTML and is less complex than HTML itself.
The most important reason why I don’t put the `.md` files inside the project is that I would need to deploy a new version of my website every time I want to push a new article to the repository (assuming I am using some CI infrastructure and not deploying my code by hand. After all, we are in 2022.)
Now you say: Ok Thiago! I will not put the files in my project, but where should I put them?
I recommend GitHub. Yes! Create a new repository and add your files and articles there. And add your custom images to your Google Drive; it’s easy to use the image links in your posts.
That’s what I did. I created a new repository on GitHub and added my articles there. Well, 1 article so far, or 2 if you count this one. With that part done, I used the GitHub API to get the articles, and every day some code checks if something new exists and inserts or updates the articles already in my database.
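Before we get to that code, here is roughly what an article file in that repository looks like: a small `key: value` metadata header, a line containing only the word `delimiter`, and then the markdown body. The parser you will see in a minute depends on exactly this layout (the values below are made up):

```markdown
title: How I built my Blog
date: 2022-09-01
summary: The boring tech behind this website
author: Thiago Ramos
delimiter
## The article body

Everything after the delimiter line is regular markdown
that gets converted to HTML.
```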
To make this work I created a GenServer that does the following:
- It starts when I deploy a new version
- Goes to the GitHub repository where I store my articles
- Gets all of them (I am not worried about the quantity right now. As you saw, I only have two.)
- Does a simple upsert into the `articles` table using Ecto. It uses the `slug` to check whether the article already exists; if it does, it updates some fields, otherwise it inserts a new article.
- After this work is done, it schedules the process to run again after 24 hours.
It is that simple.
Here is the code that does it:
```elixir
defmodule Blog.Writer.PostWriter do
  use GenServer

  alias Blog.Posts
  alias Blog.Posts.Article
  alias Blog.Repo

  @url "https://api.github.com/repos/thiagoramos23/second-brain/contents/second_brain/Projects/guides"
  @branch "main"

  def start_link(_args) do
    GenServer.start_link(__MODULE__, [], name: __MODULE__)
  end

  def init(state) do
    # Run the first fetch right after startup without blocking init/1
    {:ok, state, {:continue, :get_and_write}}
  end

  def handle_continue(:get_and_write, state) do
    get_posts_and_upsert()
    schedule_work()
    {:noreply, state}
  end

  def handle_info(:scheduled_work, state) do
    get_posts_and_upsert()
    schedule_work()
    {:noreply, state}
  end

  def get_posts_and_upsert() do
    articles_on_github = posts_on_github()
    upsert_articles(articles_on_github)
  end

  defp schedule_work do
    # 24 hours in milliseconds
    Process.send_after(self(), :scheduled_work, 24 * 60 * 60 * 1000)
  end

  defp upsert_articles(new_articles) do
    # insert_all expects plain maps, not structs, and does not fill the
    # timestamps columns for us, so prepare the entries first
    now = NaiveDateTime.truncate(NaiveDateTime.utc_now(), :second)

    entries =
      Enum.map(new_articles, fn article ->
        article
        |> Map.from_struct()
        |> Map.drop([:__meta__, :category, :id])
        |> Map.merge(%{inserted_at: now, updated_at: now})
      end)

    on_conflict = {:replace, [:title, :body, :html_body, :summary, :hash_id]}
    Repo.insert_all(Article, entries, conflict_target: [:slug], on_conflict: on_conflict)
  end

  defp posts_on_github() do
    post_category = Posts.get_category_by_slug("posts")

    @url
    |> do_request()
    |> Enum.map(&get_post(&1, post_category))
    |> Enum.sort_by(& &1.slug, &>=/2)
  end

  defp get_post(
         %{
           "download_url" => download_url,
           "html_url" => html_url,
           "sha" => hash_id,
           "name" => name
         },
         post_category
       ) do
    [post_metadata, post_content] =
      download_url
      |> do_request()
      |> split_metadata_and_content()

    metadata = parse_metadata(post_metadata)

    params =
      Map.merge(metadata, %{
        body: post_content,
        html_body: Earmark.as_html!(post_content),
        html_url: html_url,
        hash_id: hash_id,
        date: Date.from_iso8601!(metadata.date),
        category_id: post_category.id,
        slug: name |> String.split(".") |> hd()
      })

    Article.new_post(params)
  end

  defp split_metadata_and_content(post_content) do
    String.split(post_content, "delimiter\n")
  end

  defp parse_metadata(metadata) do
    metadata
    |> String.split("\n", trim: true)
    |> Enum.reduce(%{}, fn item, acc ->
      # parts: 2 keeps values that contain a colon (e.g. URLs) intact
      [key, value] = String.split(item, ":", parts: 2)
      Map.put(acc, String.to_atom(key), String.trim(value))
    end)
  end

  defp headers() do
    [{"Authorization", "Bearer #{token()}"}, {"Accept", "application/vnd.github+json"}]
  end

  defp token do
    System.get_env("GITHUB_TOKEN")
  end

  defp do_request(url) do
    url
    |> Req.get!(headers: headers(), params: [ref: @branch])
    |> then(& &1.body)
  end
end
```
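One detail the module above does not show: for it to “start when I deploy a new version”, the GenServer has to be in the application’s supervision tree. In a generated Phoenix app that means adding it to the children list in `lib/blog/application.ex`, roughly like this (the other children are the usual generated ones and may differ in your project):

```elixir
# lib/blog/application.ex (sketch)
def start(_type, _args) do
  children = [
    Blog.Repo,
    BlogWeb.Telemetry,
    {Phoenix.PubSub, name: Blog.PubSub},
    BlogWeb.Endpoint,
    # Start the article fetcher after the Repo it writes to
    Blog.Writer.PostWriter
  ]

  opts = [strategy: :one_for_one, name: Blog.Supervisor]
  Supervisor.start_link(children, opts)
end
```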
Here is where I get the articles from GitHub and parse them into an `Article`:
```elixir
defp get_post(
       %{
         "download_url" => download_url,
         "html_url" => html_url,
         "sha" => hash_id,
         "name" => name
       },
       post_category
     ) do
  [post_metadata, post_content] =
    download_url
    |> do_request()
    |> split_metadata_and_content()

  metadata = parse_metadata(post_metadata)

  params =
    Map.merge(metadata, %{
      body: post_content,
      html_body: Earmark.as_html!(post_content),
      html_url: html_url,
      hash_id: hash_id,
      date: Date.from_iso8601!(metadata.date),
      category_id: post_category.id,
      slug: name |> String.split(".") |> hd()
    })

  Article.new_post(params)
end
```
As you can see, the code is not very polished, but it works, and I am OK with it the way it is right now. The main part to focus on here is this one:
```elixir
defp upsert_articles(new_articles) do
  # insert_all expects plain maps, not structs, and does not fill the
  # timestamps columns for us, so prepare the entries first
  now = NaiveDateTime.truncate(NaiveDateTime.utc_now(), :second)

  entries =
    Enum.map(new_articles, fn article ->
      article
      |> Map.from_struct()
      |> Map.drop([:__meta__, :category, :id])
      |> Map.merge(%{inserted_at: now, updated_at: now})
    end)

  on_conflict = {:replace, [:title, :body, :html_body, :summary, :hash_id]}
  Repo.insert_all(Article, entries, conflict_target: [:slug], on_conflict: on_conflict)
end
```
I am using the `Earmark` library to transform the markdown into HTML. So, after getting the articles from GitHub, I am using `Repo.insert_all` with the `on_conflict` option to update some fields if there is already an article with the given `slug`; otherwise, it inserts the new article. There is plenty of work I could do to improve this, but it would bring me little value, and I have other things to focus on. This is a decision my past self would have had different opinions about. Nowadays, I think more about return over time invested.
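If you have never used Earmark, this one call is all the conversion takes (the exact whitespace in the output varies between versions):

```elixir
Earmark.as_html!("## Hello, *world*")
# => something like "<h2>\nHello, <em>world</em></h2>\n"
```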
The last step is to schedule the work to run again, which is done using the `Process` module:
```elixir
defp schedule_work do
  # 24 hours in milliseconds
  Process.send_after(self(), :scheduled_work, 24 * 60 * 60 * 1000)
end
```
Conclusion
In this article, I wanted to show you how I created some parts of the blog you are reading right now. I also wanted to show you that done and deployed is better than perfect and not ready. You can continuously improve, and you can always iterate.
Phoenix and Elixir are the best choices to build a reliable, scalable web application in 2022. Of course, this blog does not represent any of that, but I made it in one day. I deployed it to Fly.io the next day, which was super easy to do, and it has been working very well so far. This blog does not cost me any money because Fly.io has an excellent free tier that works great.
So, the next time you want to choose a stack to develop your blog, your e-commerce platform, or your HRIS platform serving thousands of users in more than 60 countries, as we do at Remote, keep in mind that you can choose Elixir and Phoenix.
I will leave you with this quote from Seneca to think about:
“They lose the day in expectation of the night, and the night in fear of the dawn.”