Almost a month ago, I decided to completely rebuild my website. I had created my old website using plain HTML and hosted it on AWS S3. While this approach gave me a working website, it was very hard to update because every page carried boilerplate code just to keep the look consistent. In fact, it was so cumbersome that my website’s footer still showed the year 2017 before I tore it down. In this post, I want to share my experience creating this website with Hugo and then hosting it on IPFS.

Hugo

Using a static site generator, you write your website template and your content separately. The generator then produces the HTML files by combining the two, which lets you build a website with consistent pages without any boilerplate code. There are several static site generators such as Jekyll, Gatsby and Next.js, but I chose to go with Hugo. I had no experience with any of them, so I picked Hugo solely because it is written in Go, a language I have had nothing but good experiences with. I started watching some videos on YouTube and quickly got the hang of it. The next step was browsing themes on the Hugo Themes page.
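To make the separation concrete, here is a minimal sketch of what a Hugo content file looks like; the header, footer and navigation all come from the theme’s templates. The filename and front matter values are made up for illustration.

```
---
title: "Rebuilding My Website"
draft: false
---

The body is plain Markdown. Running the hugo command combines files
like this with the theme’s templates and writes the finished HTML to
the public/ directory.
```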

After using Hugo Coder for a week, I finally settled on hello-friend-ng (at least for now). Transitioning to the new theme was very easy - another reason to use a static site generator. I forked it before using it so that I could make changes to the theme myself down the road. I have already made several changes and feature additions to make this website look the way I want, and I also integrated the Utterances comment system and a Formspree contact form.

IPFS

InterPlanetary File System (IPFS) is a peer-to-peer network for sharing files. An important difference between IPFS and a conventional server is that IPFS doesn’t use location paths (based on the file path on a particular server) to locate files. Instead, it uses a cryptographic hash derived from the content of the file. For example, a file with the content Hello, World! will always have this hash: QmYAXgX8ARiriupMQsbGXtKdDyGzWry1YV3sycKw1qqmgH. This is a revolutionary idea because it lets you refer to content without tying it to a machine, and that makes sense: people usually care about the content, not where the content resides. When a node in the IPFS network wants a file, the network finds a node which has the file and serves it from there. This means that if a particular node goes offline, the file can still be obtained as long as some other node has it. As a demonstration, you can look up the example hash right now; since Hello, World! is so common, there is a fair chance you will get the content from another machine even though I am not running the IPFS daemon I used to produce that hash. You can also use a gateway to fetch the content in case you don’t have IPFS installed - more on that later.
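The core idea - the address is derived from the bytes, not from the location - can be sketched with plain shell tools. Note that IPFS actually computes a multihash over the file’s chunked DAG representation, so the ordinary SHA-256 below only illustrates the principle; it is not the real CID computation:

```shell
# Two copies of the same content, in different "locations".
printf 'Hello, World!' > a.txt
printf 'Hello, World!' > /tmp/b.txt

# Identical bytes yield an identical digest, regardless of filename,
# directory, or machine - the essence of content addressing.
sha256sum a.txt /tmp/b.txt

# With the ipfs CLI installed, `ipfs add a.txt` would print the real
# content hash (CID), analogous to the QmYAXg... hash above.
```

Change a single byte and the digest (and hence the address) changes completely, which is exactly why the site’s hash changes on every update.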

After playing around with the IPFS command-line app, I decided to use Pinata to pin the files generated by Hugo. Pinning means adding a file and telling an IPFS daemon to keep it around in case someone needs it. You can pin a file or directory on your own machine, but what happens if your machine goes offline and no other node has decided to pin the content (which will most likely be the case)? This is where Pinata comes in - it is like AWS S3 for the IPFS world. You can also pin on your own machine or on other pinning services at the same time if you want your website to keep working even if Pinata is down.
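As a sketch, pinning the generated site on your own node looks like this (it assumes the ipfs daemon is installed and initialized, and that Hugo’s output is in public/):

```shell
# Recursively add the generated site; `ipfs add` pins the content
# on the local node by default.
ipfs add --recursive public/

# List what this node has pinned, to confirm.
ipfs pin ls --type=recursive
```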

Now that I could generate the files for the website and host them, the next step was to go around telling everyone something like, “Hey, visit my website. Install IPFS and get the content with hash QmcLme5cC4wPVBcqB55ZmFzu8G3cNa98cKYugWQcJ7YSDG”. Unfortunately, that’s not acceptable.

So, there are two problems. The first is the need to install IPFS: you can’t expect everyone to have it installed. To solve that, we can use a gateway. A gateway is simply a web server that is also an IPFS node, so you can put the content hash in an HTTP request and the gateway will serve the content. While this takes us back to the centralized world, it is the only practical solution for the time being since web browsers still don’t natively support IPFS. A slightly savvy user can, however, switch to any of the many public gateways if one is down.
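For example, any public gateway can serve the same content over plain HTTPS by putting the hash in the URL path (the two gateway hostnames below are just well-known examples):

```
https://ipfs.io/ipfs/QmcLme5cC4wPVBcqB55ZmFzu8G3cNa98cKYugWQcJ7YSDG/
https://cloudflare-ipfs.com/ipfs/QmcLme5cC4wPVBcqB55ZmFzu8G3cNa98cKYugWQcJ7YSDG/
```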

We still have another problem to address - the requirement to give out hashes. This sounds similar to a better-known problem - the need to use IP addresses to access websites - so there is a similar solution: DNS. Actually, the hash problem is even worse, because the hash changes every time you update your website. IPNS (InterPlanetary Name System) helps somewhat: it gives you a fixed hash, under which you publish the latest content hash after every change. However, we don’t want to hand people any hash at all (unless it’s hash browns). This is where DNSLink comes in. DNSLink works by adding a TXT record to your DNS zone; an IPFS node resolving your domain then uses the content hash found in the record’s value. For example, _dnslink.shakhar.com currently has the value dnslink=/ipfs/QmcLme5cC4wPVBcqB55ZmFzu8G3cNa98cKYugWQcJ7YSDG (before publishing this post). This means you can reach this website through any IPFS gateway.
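In DNS zone file notation, the record looks roughly like this (the TTL value is illustrative):

```
_dnslink.shakhar.com.  300  IN  TXT  "dnslink=/ipfs/QmcLme5cC4wPVBcqB55ZmFzu8G3cNa98cKYugWQcJ7YSDG"
```

A DNSLink-aware gateway can then resolve the domain itself, via a path like /ipns/shakhar.com, so visitors never need to see a raw hash.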

Cloudflare

I am using Cloudflare to manage my DNS zone and also as my IPFS gateway. Cloudflare is known for its Content Delivery Network (CDN) service, and Cloudflare’s DNS management page offers an option to proxy traffic to the target. With proxying enabled, the record resolves to Cloudflare’s CDN, which serves the traffic either from the target or from its own cache - resulting in better performance if the site is requested a lot. And that’s not the only benefit: Cloudflare provides analytics data and lets you set up page rules. I am using a page rule to always redirect HTTP requests to HTTPS. I was disappointed about just one thing, though. I wanted to use a CNAME record pointing to cloudflare-ipfs.com, but due to the orange-to-orange problem, I wasn’t allowed to have it proxied. So I just used the A records of cloudflare-ipfs.com, and it has worked so far (I am just hoping they never change those IPs). The nice thing about using cloudflare-ipfs.com as the gateway is that Cloudflare takes care of including the domain names (shakhar.com, *.shakhar.com) in the SSL certificate’s Subject Alternative Names, so https://shakhar.com works without any errors.

IPFS Publisher

So, everything was working at this point. However, updating the website was a bit cumbersome. Every time I wanted to make a change to the website, I had to do the following:

  • Use Hugo to generate the website
  • Log in to Pinata and upload the website directory
  • Log in to Cloudflare and update the dnslink TXT record

So, I decided to write a simple bash script as part of the IPFS Publisher project to take care of all of this. The script expects a file named publish.yaml in which you specify the details of your project setup. Publishing is then just a matter of typing publish and pressing enter in a bash terminal. I also added an action.yml file to the project so that it can be used as a GitHub Action, which lets me publish with just a commit and push to my website’s repository. A nice side effect of using GitHub Actions is that this setup would keep working even if I ever decide to switch to a Windows machine.
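As a rough sketch of the idea - the actual field names are defined by the IPFS Publisher project, so consult its README for the real schema; everything below is hypothetical:

```
# publish.yaml - hypothetical example, not the project's real schema
site_directory: public        # Hugo's generated output
pinata_api_key: <from-env>    # credentials for the pinning service
cloudflare_zone: shakhar.com  # DNS zone holding the _dnslink record
```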

Conclusion

This is a broad overview of the tools I used to create this website and how I am maintaining it. I hope this post serves as a guide for anyone looking to host a website on IPFS. Feel free to check out the repository for this website for further guidance.