Self-hosted Firefox Sync Server

A couple of months ago I started setting up several services on my own servers to get rid of third-party dependencies like Google. Even though Mozilla is nothing like a big mega-corp, I still like the idea of not depending on third parties (or, if you do, of being able to migrate easily to another provider).

In this post I will explain how I set up my own Firefox Sync Server. Most of my information comes from here and here.

I obviously did some research online before starting something like this from scratch. I found several posts like this one or this one, but all of them read as if people just wanted to make things work without digging too much into how things really function. An indicator of this was the use of FF_SYNCSERVER_FORCE_WSGI_ENVIRON or SYNCSERVER_FORCE_WSGI_ENVIRON, where I could see that they did not really understand what was happening under the hood.

Here you can find my docker-compose:

version: '3.5'

networks:
  world:
    external: true

services:

  syncserver:
    image: mozilla/syncserver:latest
    container_name: syncserver
    restart: on-failure
    networks:
      - world
    volumes:
      - /srv/syncserver:/data
    expose:
      - "5000"
    environment:
      - "SYNCSERVER_ALLOW_NEW_USERS=false"
      - "SYNCSERVER_PUBLIC_URL=https://your.fqdn.here"
      - "SYNCSERVER_SECRET=$SYNCSERVER_SECRET"
      - "SYNCSERVER_SQLURI=sqlite:////data/syncserver.db"
      - "SYNCSERVER_FORWARDED_ALLOW_IPS=127.0.0.1,172.18.0.2,172.18.0.1"
      - "SYNCSERVER_BATCH_UPLOAD_ENABLED=true"
      - "SYNCSERVER_FORCE_WSGI_ENVIRON=false"
      - "PORT=5000"
    labels:
      - "traefik.frontend.rule=Host:your.fqdn.here"
      - "traefik.docker.network=world"
      - "traefik.enable=true"
      - "traefik.frontend.passHostHeader=true"
      - "traefik.frontend.headers.STSPreload=true"
      - "traefik.frontend.headers.STSSeconds=31536000"
      - "traefik.frontend.headers.ForceSTSHeader=true"
      - "traefik.frontend.headers.STSIncludeSubdomains=true"
      - "traefik.frontend.headers.contentTypeNosniff=true"
      - "traefik.frontend.headers.frameDeny=true"
      - "traefik.frontend.headers.customFrameOptionsValue=SAMEORIGIN"
      - "traefik.frontend.headers.browserXSSFilter=true"
      - "traefik.frontend.headers.referrerPolicy=no-referrer"
      - "traefik.frontend.headers.contentSecurityPolicy=default-src 'self'; script-src 'self'"

In this setup I run syncserver behind Traefik, configured automatically via labels. Notice SYNCSERVER_FORCE_WSGI_ENVIRON=false. We do not need to set this to true because of SYNCSERVER_FORWARDED_ALLOW_IPS=127.0.0.1,172.18.0.2,172.18.0.1: Traefik adds the X-Forwarded-For header to every proxied request, and listing its addresses there tells the server to trust that header. The traefik.frontend.passHostHeader=true label additionally preserves the original Host header, which needs to match SYNCSERVER_PUBLIC_URL.
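The exact addresses to trust depend on your Docker network, so don't copy mine blindly. A quick way to look yours up (assuming the network is named world as in the compose file, and the reverse proxy container is named traefik):

```shell
# Gateway of the "world" network (the host side, typically x.y.z.1)
docker network inspect world --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'

# Address the traefik container got on its networks
docker inspect traefik --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}'
```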

The SYNCSERVER_SECRET environment variable has been generated with the command:

head -c 20 /dev/urandom | sha1sum

and inserted into a .env file containing:

# This file is used to define environment variables to be used
# for variable substitution in your docker compose file.
# https://docs.docker.com/compose/env-file/
SYNCSERVER_SECRET=YOUR_SECRET_GOES_HERE
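Putting the two steps together, you can generate the secret and write the .env file in one go. Note that sha1sum prints the digest plus a trailing " -", so we keep only the first field:

```shell
# Generate 20 random bytes, hash them, keep only the 40-char hex digest
SYNCSERVER_SECRET=$(head -c 20 /dev/urandom | sha1sum | awk '{print $1}')
echo "SYNCSERVER_SECRET=${SYNCSERVER_SECRET}" > .env
```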

Now, in order for Firefox to talk to our Sync Server, we need to configure it as follows (blatantly copied from this blog post):

  1. Go to about:config and search for identity.sync.tokenserver.uri.
  2. Now replace https://token.services.mozilla.com/1.0/sync/1.5 with https://yourawesomeurl.tld/token/1.0/sync/1.5. Don’t forget the token part, because the self-hosted Firefox Sync Server exposes the token server in a subdirectory.
  3. Just to make sure everything is set up correctly, log out of Firefox (if you logged in before) and restart the browser.
  4. Now go to the settings, login with your Firefox account and the synchronization can start.
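Before touching the browser, it's worth checking that the server is reachable through Traefik at all. A quick sanity check, assuming the syncserver image exposes the usual mozsvc heartbeat endpoint (and substituting your own public URL):

```shell
# Should return HTTP 200 if the service is up and reachable through the proxy
curl -i https://your.fqdn.here/__heartbeat__
```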

As you may know already, we still need Mozilla’s Firefox Accounts service for all of this to work. I am pretty sure I will try to set that up myself in the not too distant future… 🙂

Setting up a blog…

So I have been thinking for a while about getting back to blogging, and I figured I could try to set something up on my home server.

My resources are really limited, so, even though I actually hate the trend, I decided to run my blog on Ghost.

I also wanted some sort of isolation between the things running on my server, so I had two realistic options: Docker or traditional virtual machines. Remember I mentioned that my resources were limited? So Docker was the chosen option.

I didn’t want to serve the blog straight from Docker, so I opted for running a web server in front (although that one is not running on Docker 😉).

So:

server {  
     listen 80;
     listen 443 ssl;
     server_name blog.marcdeop.com;
     ssl_certificate        /etc/letsencrypt/live/blog.marcdeop.com/fullchain.pem;
     ssl_certificate_key    /etc/letsencrypt/live/blog.marcdeop.com/privkey.pem;

     root /var/www/blog;

     location /.well-known {
          alias /var/www/blog/.well-known;
     }

     location / {
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header Host $http_host;
       proxy_set_header X-Forwarded-Proto $scheme;
       proxy_pass http://localhost:2368;
     }
}
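After dropping this into your nginx configuration, a syntax check before reloading saves you from taking the site down with a typo (the reload command assumes a systemd-based host):

```shell
# Validate the configuration, then reload only if it parses cleanly
nginx -t && systemctl reload nginx
```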

This is not a very complex nginx configuration; essentially it’s just a reverse proxy. The most relevant part might be the use of Let’s Encrypt (maybe I’ll write something about that some other time).

Now let’s have a look at how to create the container to run ghost.
Why not use the official image? Well, I actually can’t, as my home server is ARM based. I did some quick research online and found easypi/ghost-arm and kennethlimcp/armhf-ghost. Neither of them suited me: the first one doesn’t have much information on how it is built, and the second one, even though it looks very nice, builds its images on Debian, and its Alpine version seems a little outdated. Besides, I got to practice emulating ARM on my workstation :-).

I followed this guide to set up a VM with ARM emulation so I could quickly build my Docker images (“quickly” is really not the right word; the VM is slow).
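If you'd rather skip the full VM, a possible shortcut (one I haven't compared against the VM approach) is to register qemu-user-static binfmt handlers on the workstation so Docker can run ARM binaries directly on an x86 host:

```shell
# Register QEMU interpreters for foreign architectures (privileged, one-off)
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# After that, ARM images can be run (and built) on the x86 host;
# uname -m reports the emulated architecture
docker run --rm arm32v7/alpine uname -m
```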

I heavily based my work on the official image with slight changes. You can find the end result here.

Right now I am running the blog on a development version of the docker image but I will try to upload the final one to my recently created docker hub repository.

Once the image was built, I only needed to run the container:

docker run -d --restart always -v $blog:/var/lib/ghost -e NODE_ENV=production -p 2368:2368 marcdeop/ghost-arm:0-alpine
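A quick way to check that the container came up properly (image name and port as in the command above):

```shell
# Tail the container's logs and hit Ghost on the published port
docker logs --tail 20 $(docker ps -q --filter ancestor=marcdeop/ghost-arm:0-alpine)
curl -I http://localhost:2368
```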

The first entry of the blog, as it looks right now…

I haven’t yet found a theme that I really like; this needs further investigation, but so far… I am quite happy with the state of things.