Hosting Your Own Media Server


Intro

With the rise of AI and companies increasingly wanting to scan your pictures, videos, and files, I decided to look into self-hosting my own media server.

A few things I wanted:

  • A way to listen to music I have legally downloaded and keep my playlist and favorites intact, no matter what device I was using.
  • A way to stream my legally obtained videos.
  • A way to keep my photos in an organized and shareable library.
  • My own Google Drive/Dropbox where I can hold my files and share them with friends and family.

I was able to successfully achieve this by setting up my own Media Server with Ubuntu Server and Docker Containers. A scary setup for sure, but it was well worth the effort.

Prerequisite Knowledge In Order To Follow This Guide

Here are some prerequisites that I am going to assume you know:

  • How To Use Bash (Linux Terminal)
    • If you don’t know how to navigate using commands in a Linux terminal such as cd, ls, pwd, chmod, nano/vi(m), then a media server is probably not for you.
    • Likewise, do you know what and where ~/ is?
  • SSH
    • Hand in hand with Bash, if you don’t know how to SSH, I don’t think you should make a media server.
  • You can’t have the mindset of “I only want to use a GUI”:
    • You are literally going to set up a server that won’t have a monitor, keyboard, or mouse plugged into it after setup. Why waste resources on rendering a GUI and managing updates for it?
  • Knowledge of, and the ability to do, port forwarding
    • You must know how to port forward, and you must actually be able to; not having direct access to the router will not work.

Hardware I Have Used

If you want to set up your own server, you will obviously need a computer. I first started with my Windows 10 desktop, which only hosted my music. As my needs grew, I decided to set up my own Ubuntu server on an old PC that I used in college, described in the next section.

My Original Server Setup From My Old College PC

I originally used my old college PC from 2012 as my media server. For reasons stated in the next section, I decided to get a newer computer to use instead. But here were the specs:

Why I Decided To Make A New Server Build

After running my server for six months straight, I started to notice issues with my 12-year-old hardware.

  1. The power consumption was enormous (and thus the cost). Using a watt meter, I estimated that it cost me $0.76 a day to run my original college server (it had two graphics cards at first), idling at 331 watts. Once I removed one of those cards, it idled at about 186 watts. Newer CPUs can idle around 5 watts.
  2. The CPU couldn’t handle running all of my services. At the time I retired that server, I was running Immich (Photos), Navidrome (Music), Jellyfin (Video), Nextcloud (File Storage), Gramps (Ancestry), Mealie (Recipe Tracker), Netdata (statistics), and Caddy (for my reverse proxies). More specifically, it couldn’t keep up with scanning my photos to identify people and objects.
  3. I was running out of storage space. I could have upgraded the HDDs, but I figured I would just upgrade the system at the same time.

My New Build

My new and current build’s hardware consists of:

It idles at 25 watts, compared to the 186 watts my college PC used. That is better for the environment and saves me money!

What Is A Docker Container

In order to understand how my Media Server runs, you need to know what a Docker Container is. A Docker Container is like a virtual machine, although it isn’t emulating an operating system. Rather, it keeps all of your software contained. If one of these containers is compromised by an unauthorized source, only that container is affected. It can’t access anything outside of that container.

For example, my Navidrome container (music server) only has access to my /mnt/media1/navidrome-data folder; that’s it. In fact, it thinks all of my data is at /data, which isn’t even a directory on my server. In essence, Docker fools anything in the container into thinking a folder is somewhere it isn’t, and the container can’t access anything outside of it. This is an oversimplified way to explain it, but Docker does the same thing with port numbers: if you have two things that want to run on port 80, you can tell Docker to expose one of them on a different port (because you can only run one thing on one port).

Another nice thing about Docker Containers is that they start up automatically when you reboot or turn on your computer.
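
To make the folder and port mapping concrete, here is a minimal sketch of what that looks like in a docker-compose.yml. It mirrors the Navidrome setup covered later in this guide; the host path and port are just examples, so adjust them to your own setup.

services:
  navidrome:
    image: deluan/navidrome:latest
    ports:
      - "4533:4533"                        # host port 4533 -> container port 4533
    volumes:
      - /mnt/media1/navidrome-data:/data   # the only folder the container can see, and it sees it as /data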

Common Docker Compose Commands

Here are some common commands that are useful when using Docker Containers. They are all run inside the directory containing the docker-compose.yml file.

  • Start a container and read its output:
    • sudo docker compose up
  • Start a container detached (it will run on its own and start up on the next reboot):
    • sudo docker compose up -d
  • Shut down a container
    • sudo docker compose down
  • Restart a container
    • sudo docker compose down && sudo docker compose up -d
  • Update a container
    • sudo docker compose pull && sudo docker compose up -d
  • Delete a container (this also removes its volumes)
    • sudo docker compose down -v
  • Run bash commands inside of a container
    • Unlike the previous commands, this can be done from anywhere.
    • 1) Get the id of your container by typing sudo docker ps
    • 2) Run sudo docker exec -it <container_name_or_id> /bin/bash
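
Two more standard Docker Compose commands that I find handy (not part of the list above, but they are run the same way, from the directory containing docker-compose.yml):

  • Watch a container’s logs (press Ctrl+C to stop following):
    • sudo docker compose logs -f
  • List the containers defined by the compose file and their status:
    • sudo docker compose ps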

Updating a Docker Container

  • I highly recommend keeping your Docker Containers up to date.
  • All updates and patch notes can typically be found on their GitHub releases page, such as Immich’s at https://github.com/immich-app/immich/releases.
  • I personally watch each GitHub repository (from its main page) for release and security alerts for anything I use.
  • As mentioned previously, type sudo docker compose pull && sudo docker compose up -d
  • The command above works for all Docker Containers (as long as you used the :latest tag in the Docker Compose file)
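
If you end up running several of the *-app directories described later in this guide, a small shell loop can update them all in one pass. This is only a sketch; the directory names below are assumptions, so swap in whatever directories actually hold your docker-compose.yml files:

# update every compose project in the listed directories
for d in ~/caddy-app ~/immich-app ~/navidrome-app; do
  (cd "$d" && sudo docker compose pull && sudo docker compose up -d)
done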

Software Used

I run a ton of things on my media server. Below you will find a list of everything I use, what each docker-compose file looks like, and what I use each service for.

Porkbun – Domain Name

I originally used Google Domains to purchase my domain, but they are no longer in operation (as all Google products end up being shut down), so I instead recommend using Porkbun to purchase a domain name (such as menghini.org).

Installing Ubuntu Server

  1. Obtain Ubuntu Server at https://ubuntu.com/server
  2. Use a program such as Etcher to flash it to a USB drive.
  3. Install Ubuntu Server (the first option) when asked (not minimized or third-party drivers)
  4. The next screen is network connections. Connect it to the internet
  5. Then it is the proxy settings
  6. The next screen is an archive mirror. I just clicked Done
  7. The next screen is a guided storage configuration. This is which hard disk you want the OS on. I chose the entire disk with the LVM group. This will allow you to use multiple hard disks in the future, and they will act as one (two 1 TB hard disks could be seen as one hard disk that is 2 TB)
  8. The next screen asks about the storage configuration. I chose to only use 100G (the default). More info can be found at https://ubuntu.com/server/docs/install/storage 
  9. When you are asked for a username, consider that it will be username@serverName. I don’t think your actual name matters.
  10. I skipped Ubuntu Pro
  11. It will ask you if you want to install OpenSSH. You should do so.
  12. It will eventually ask if you want Docker amongst MANY options. You should install Docker. This will install Docker and Docker Compose. You will want this so that you can set things up more easily later.
  13. Once the installer is finished, you should be done and ready to reboot.
  14. As soon as you are logged in, you should be able to SSH into the computer using PuTTY.
    1. Type ip addr to get your IP address
  15. To shut down, you need to type sudo shutdown now
  16. If you are using a laptop, you may not want the computer to go to sleep or turn off when the lid is shut, as you probably won’t be sitting at it. If so, do this:
    1. Edit the /etc/systemd/logind.conf file
      • sudo vim /etc/systemd/logind.conf
    2. You want to uncomment the #HandleLidSwitch=suspend line and change it to HandleLidSwitch=ignore
    3. Also, uncomment the #LidSwitchIgnoreInhibited=yes line and change it to LidSwitchIgnoreInhibited=no
    4. Also, uncomment the #HandleLidSwitchDocked=suspend line and change it to HandleLidSwitchDocked=lock
    5. Run sudo systemctl restart systemd-logind.service
  17. You should be all set to go. I strongly recommend reserving a static IP address for the server on your router.
  18. I would also recommend you take a look at your BIOS settings and set up your server to turn on automatically when it receives power.

Accessing the Server

To remote into your server, you should consider the following programs:

  • On a Windows Machine
    • SSH – PuTTY
    • Transfer Files – WinSCP
  • On a Mac
    • SSH – Regular Terminal
    • Transfer Files – Cyberduck
  • Linux
    • SSH
    • Transfer Files – SSH with your file browser
      • To connect from another Linux machine (in Zorin, for example), go to your file manager and click “Other Locations”
        • In “Connect to Server”, type sftp://username@ipaddress/
  • Outside of Network
    • I port forward port 22 only to certain IP addresses (work, my parents’ house). You should also have an A record without a proxy to do this, such as ssh.menghini.org.

Updating Ubuntu Server

  1. Refresh the package lists:

sudo apt update

  2. Upgrade the packages:

sudo apt upgrade

  3. If there was a kernel update, you can type uname -r to see your current kernel version before rebooting; write that number down so you can verify the update later.
  4. Reboot by typing:

sudo reboot

  5. If there was a kernel update, type uname -r again after rebooting; you should see a different version number now.
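
Optionally, after a few rounds of updates you can clean out packages and old kernels that are no longer needed. This is a standard apt command rather than part of the original steps, and it shows you what it plans to remove before doing anything:

sudo apt autoremove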

Mount a Hard Disk

If you want to set up RAID 1, see that section instead: Setup Software RAID 1 Using mdadm

If you want to mount another hard disk, you will do it now:

  1. You will have to format your hard disk as ext4 using gparted 1.4.06-amd64.iso (AND NOT NEWER) before you do this.
  2. Run lsblk to see what hard disc you have
  3. You need a directory to serve as the mount point for the new disk. You can create this anywhere, but traditionally it’s done in /mnt or under /media.
    1. I want to use sda/sda1. You may read something about mounting in /mnt vs /media.
    2. The general rule is
      1. use /mnt for stuff you mount by yourself
      2. leave /media for the system to mount its stuff
      3. This doesn’t really matter, but if you were using Ubuntu Desktop, some of this would be automatic when plugging in a flash drive.
    3. So we will use /mnt.
  4. I mounted mine by using this command sudo mount /dev/sda1 /mnt/media1
    1. You can verify that this worked with df -h or lsblk
  5. Now you want this to mount every time you boot. You will need the UUID of the disk so type blkid to find that.
  6. You will need to edit /etc/fstab using your favorite text editor such as vim using sudo.
  7. Add the following line 

UUID=whateverYourUUIDIs /mnt/media1 ext4 defaults 0 2

(media1 is the directory you created under /mnt)

  8. Your /etc/fstab will now contain a line like the one above.
  9. Run sudo reboot to see if it works (see the tip after this list for checking the entry before you reboot)
    1. If it doesn’t work (you will reboot into an emergency mode), like mine did, it might be because you are getting this error:  /dev/disk/by-uuid/2b48a758-62b2-45a3-8b59-0be595c44512 has unsupported features: FEATURE_c12
      1. If that is the case, your e2fsprogs program is out of date, and in my version of Ubuntu I couldn’t update it without compiling it myself
      2. I had to download an older version of gparted in order to fix it
  10. Now, in order to use it, you will need to type sudo chmod -R 777 /mnt/media1. Unless, of course, you like to type sudo a lot.
  11. You can see what is currently mounted with sudo parted -l
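
Before rebooting to test a new /etc/fstab entry (step 9 above), you can sanity-check it first. These are standard util-linux commands rather than part of the original steps:

sudo findmnt --verify     # parse /etc/fstab and report any problems it finds
sudo mount -a             # try to mount everything listed in /etc/fstab right now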

Setup Software RAID 1 Using mdadm

I used RAID 1 with two hard disks so that they are always mirrored. If one fails, it is unlikely that both will fail at the same moment, so I have time to replace the broken one.

Install RAID Array

The steps below are from https://www.linuxbabe.com/linux-server/linux-software-raid-1-setup 

  1. Install mdadm:
    1. sudo apt install mdadm
  2. Reboot your machine by typing sudo reboot
  3. Type sudo fdisk -l to see all of your disks. They should have no disklabel.
    1. If one has a disklabel of gpt, that’s a NO
      1. In that case, type (where X is the drive you are deleting)
        1. sudo shred -v -n1 -z /dev/sdX
          1. This command only needs to start; it doesn’t have to do the entire disk. Overwriting part of it will render the old contents useless, which is good enough.
  4. The disks should now be blank; if they are, they will show no disklabel type.
  5. Use gdisk to create a GPT partition on both disks. For example:
    1. sudo gdisk /dev/sdb
      1. Press n to create a new partition.
      2. Choose a partition number (usually 1).
      3. Set the size (press Enter to use the default).
      4. Choose a partition type (press Enter for the default).
      5. Press w to write the changes.
  6. Do the same steps again for the second disk (for me it was /dev/sdc)
  7. Now, you can use mdadm to create the RAID 1 array with the two partitions you created:
    1. sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
      1. /dev/md0 is the name of your RAID device.
      2. --level=1 specifies RAID 1 (mirror).
      3. --raid-devices=2 indicates that there are 2 devices in the array.
  8. You can check the status of your RAID array using:
    1. cat /proc/mdstat
  9. You can now create a filesystem on your RAID array. For example, to create an ext4 filesystem:
    1. sudo mkfs.ext4 /dev/md0
  10. You can now create a mount point and mount the RAID array:
    1. sudo mkdir /mnt/myraid
    2. sudo mount /dev/md0 /mnt/myraid
  11. To ensure the RAID array is mounted at boot, add an entry to your /etc/fstab file:
    1. echo '/dev/md0 /mnt/myraid ext4 defaults 0 0' | sudo tee -a /etc/fstab
  12. I rebooted my computer
  13. When I restarted my computer, it broke. Therefore I had to do the following:
  14. After it failed to mount the disc after a minute and a half, I had to go into emergency mode and run mdadm --detail --scan
    1. The result was: “ARRAY /dev/md/menghiniserver:0 metadata=1.2 name=menghiniserver:0 UUID=...
  15. I then ran: sudo mdadm --detail /dev/md/menghiniserver:0 to get the status of my RAID. It worked.
  16. I then ran sudo mount /dev/md/menghiniserver:0 /mnt/media1
  17. I then edited my /etc/fstab file so it would boot correctly:
    1. /dev/md/menghiniserver:0  /mnt/media1  ext4  defaults  0  0

If that didn’t work…

  1. I was getting this error on reboots: a stop job is running for availability of block devices on reboot
  2. The name of my RAID device kept changing, so I decided to do this instead:
  3. I ran blkid and found the UUID for the /dev/md0 mount.
  4. I then used sudo vi /etc/fstab and added the UUID instead

Monitor the RAID Array

Use mdadm to monitor and manage the RAID array:

  1. To check the status: sudo mdadm --detail /dev/md0
  2. To add or remove devices: sudo mdadm --manage /dev/md0 --add /dev/sdd1 (for example)
  3. To stop the array: sudo mdadm --stop /dev/md0

You will also be able to monitor this, with email updates, through software called Netdata, covered in the next section.
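
If you would rather have mdadm itself email you when an array degrades (instead of, or in addition to, Netdata), it can do that too. This is a sketch based on mdadm’s standard monitoring feature; it assumes your server can already send mail (for example, through a configured relay), and the address is a placeholder. Add a line to /etc/mdadm/mdadm.conf:

MAILADDR [email protected]

Then send a test alert to confirm it works:

sudo mdadm --monitor --scan --oneshot --test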

Netdata (Monitoring Software)

I use Netdata to monitor my server remotely. It will email you and send you a push notification for any critical issues.

To install it, I did this:

  • Type: wget -O /tmp/netdata-kickstart.sh https://get.netdata.cloud/kickstart.sh && sh /tmp/netdata-kickstart.sh
  • You can see your server locally at http://ipaddress:19999/
    • You can find your ip address by typing ip addr in your terminal.
  • If you want to monitor the CPU temperature in Netdata, you can install that by typing:
    • sudo apt install lm-sensors
    • Then type
      • sudo sensors-detect
    • Then it should show up in Netdata
  • To start Netdata, run sudo systemctl start netdata.
  • To stop Netdata, run sudo systemctl stop netdata.
  • To restart Netdata, run sudo systemctl restart netdata.

Cloudflare (Hiding Your IP Address)

I use Cloudflare’s free tier to hide my IP address. Why do I do this? To avoid DDoS attacks. Cloudflare acts as a proxy, so visitors connect to Cloudflare first. If you are receiving too many requests, Cloudflare will start showing a robot check to visitors before connecting them to you.

This is how I set it up:

  1. Go to https://www.cloudflare.com/ and make a free account.
  2. Type in your top level domain name such as menghini.org and then follow the prompts to connect your domain.
  3. When I set up mine, I had ERR_TOO_MANY_REDIRECTS when connecting to any of my services. In order to fix this, I had to go to the SSL/TLS overview settings and change it to “Full” from the default which was “Flexible”

SSH (or other ports) Using Cloudflare Or Uploading More Than 100MB

Cloudflare’s proxy only allows these HTTP and HTTPS ports:

  • HTTP Ports: 80, 8080, 8880, 2052, 2082, 2086, 2095
  • HTTPS Ports: 443, 2053, 2083, 2087, 2096, 8443

Their proxy also only allows you to upload 100 MB at a time for a single file, and you cannot change this. This becomes an issue with software such as Immich (Photos) and Nextcloud (Storage), as I tend to upload videos that are larger than 100 MB.

If you want to SSH or do other port forwarding for things such as games, you will need an A record that goes directly to your IP without the proxy. There is no workaround.

If you are going to allow SSH from outside of your network, I highly recommend that you only port forward it for approved IP addresses; otherwise, anyone can try to connect to you without any DDoS protection, which means someone can try to guess your password (you should use a passphrase).
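
If you do expose SSH to the outside world, one extra hardening step (my own suggestion, on top of the passphrase advice above) is to disable password logins entirely and only allow SSH keys. A minimal sketch of the relevant lines in /etc/ssh/sshd_config:

# /etc/ssh/sshd_config
PasswordAuthentication no     # refuse password logins; keys only
PubkeyAuthentication yes

After editing, restart SSH with sudo systemctl restart ssh, but make sure your key-based login already works first, or you will lock yourself out.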

Caddy (SSL Certificates, Reverse Proxy)

First and foremost, you will want to run Caddy. Although you can use Caddy for many different things, the two things I use it for are reverse proxying and automatic SSL certificates, which both come from the same one-line configuration. So that you don’t have to expose port numbers, Caddy will reverse proxy a port such as 2283 onto 80 (HTTP) and 443 (HTTPS) and serve it at something like photos.menghini.org, rather than you having to type in an IP address (which cannot have an SSL certificate) and a port number.

To install Caddy, here are the directions, with an example:

  1. Make a directory called caddy-app in your home directory
  2. In that directory, make a file called docker-compose.yml with these contents.

version: "3.9"

services:
  caddy:
    image: caddy:2-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config

volumes:
  caddy_data:
  caddy_config:

  3. The image version (caddy:2-alpine) can be changed out to a different version, if desired.
  4. You now have to make a file called Caddyfile in the same directory as docker-compose.yml that has the following contents. Obviously, change the file as needed.

# Global options (if any)

# Define a block for photos.menghini.org
photos.menghini.org {
    reverse_proxy 192.168.1.101:2283
}

# Define a block for music.menghini.org
music.menghini.org {
    reverse_proxy 192.168.1.101:4533
}

# Refer to the Caddy docs for more information:
# https://caddyserver.com/docs/caddyfile

  5. This will allow whatever is at port 2283 to be accessed at photos.menghini.org, and likewise for my music.
  6. To note, 192.168.1.101 is the static IP address of my server on my internal network.
  7. Run sudo docker compose up -d
  8. This command starts a Docker Container detached, so it will run on its own and you are free to close your terminal. For debugging purposes, I will sometimes run it without the -d to see any errors.
  9. If you have issues and need to restart it, run sudo docker compose restart caddy
  10. You can also turn it off by running sudo docker compose down in the directory
  11. All Docker Containers are started and stopped in the directory where their docker-compose.yml lives.
  12. At this point, it should start correctly. Note that you will need to port forward ports 80 and 443 on your router to the IP address of the server.
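
A tip of my own: when you later edit the Caddyfile (for example, to add a new subdomain), you don’t have to take the container down. Assuming you kept the service name caddy from the compose file above, you can reload the configuration in place from the caddy-app directory:

sudo docker compose exec caddy caddy reload --config /etc/caddy/Caddyfile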

Immich (Photo Storage)

Immich is flat out Google Photos. It is made to mimic every feature; as far as I am aware, every feature that Google Photos has, Immich has too. The biggest strength is that Google isn’t scanning through your photos. I used to tell my students, “No Google employee is going to look through your pictures.” However, with how big AI has become, I don’t trust that as much as I used to.

For the next few steps, I followed the guide at https://immich.app/docs/install/docker-compose, but it is also written out below.

  1. Create a directory of your choice (e.g. ./immich-app) to hold the docker-compose.yml and .env files. Mine is in my home directory.

mkdir ./immich-app

cd ./immich-app

  2. Download docker-compose.yml by running the following command:

wget https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml

  3. Download the .env file:

wget -O .env https://github.com/immich-app/immich/releases/latest/download/example.env

  4. Verify that the files are there with ls -la
  5. Populate the .env file with custom values, if desired, using your favorite text editor (vim):
    • Populate custom database information if necessary.
    • Populate UPLOAD_LOCATION with your preferred location for storing backup assets.
      • I picked /mnt/media1/immich-library
    • Consider changing DB_PASSWORD to something randomly generated
  6. Start the containers (you may need sudo):
    • sudo docker compose up -d

Installing Jellyfin

  1. Make a directory in your home directory called jellyfin-app
  2. Personally, I made a folder called jellyfin-data in /mnt/media1 with the folders /config, /cache, /media, and /media2. I made these with sudo, so root owns them
  3. I also made /media/movies and /media/shows
  4. Make a file called docker-compose.yml with the following contents. Change it as needed

version: '3.5'

services:
  jellyfin:
    image: jellyfin/jellyfin
    container_name: jellyfin
    #user: uid:gid
    network_mode: 'host'
    volumes:
      - /path/to/config:/config
      - /path/to/cache:/cache
      - /path/to/media:/media
      - /path/to/media2:/media2:ro
    restart: 'unless-stopped'
    # Optional - alternative address used for autodiscovery
    environment:
      - JELLYFIN_PublishedServerUrl=http://example.com
    # Optional - may be necessary for docker healthcheck to pass if running in host network mode
    extra_hosts:
      - "host.docker.internal:host-gateway"

  5. I commented out the user line because I just use the default user (root).
  6. Run sudo docker compose up -d
    1. If you have issues and need to restart it, run sudo docker compose restart jellyfin

Installing Navidrome

Navidrome is a server for music files. Basically, it’s your own Spotify.

I followed directions at https://www.navidrome.org/docs/installation/docker/, but they are also listed below.

  1. I created a directory in the home directory called navidrome-app
  2. I created a docker-compose.yml with the following contents:

version: "3"

services:
  navidrome:
    image: deluan/navidrome:latest
    user: 1000:1000 # should be owner of volumes
    ports:
      - "4533:4533"
    restart: unless-stopped
    environment:
      # Optional: put your config options customization here. Examples:
      ND_SCANSCHEDULE: 1h
      ND_LOGLEVEL: info
      ND_SESSIONTIMEOUT: 24h
      ND_BASEURL: ""
    volumes:
      - "/path/to/data:/data"
      - "/path/to/your/music/folder:/music:ro"

  3. Run sudo docker compose up -d
  4. Navidrome requires that files have metadata prior to uploading them. You should use Mp3tag to make sure any music you upload has a title, album name, and album artist; otherwise, it won’t combine songs into albums.
  5. I personally use Sonixd on my PC. Unfortunately, it is no longer maintained in favor of Feishin (which has fewer features and is worse in every way). On my iOS device, I use Amperfy to listen to my music.

Installing Gramps

Gramps is genealogy software.

1. Create a new file on the server named docker-compose.yml and insert the following contents. Note: /mnt/media1/ is where I want the files to be stored; change it as needed.

version: "3.7"

services:
  grampsweb: &grampsweb
    image: ghcr.io/gramps-project/grampsweb:latest
    restart: always
    ports:
      - "5000:5000"  # host:docker
    environment:
      GRAMPSWEB_TREE: "Gramps Web"  # will create a new tree if not exists
      GRAMPSWEB_CELERY_CONFIG__broker_url: "redis://grampsweb_redis:6379/0"
      GRAMPSWEB_CELERY_CONFIG__result_backend: "redis://grampsweb_redis:6379/0"
      GRAMPSWEB_RATELIMIT_STORAGE_URI: redis://grampsweb_redis:6379/1
    depends_on:
      - grampsweb_redis
    volumes:
      - "/mnt/media1/gramps_users:/app/users"  # persist user database
      - "/mnt/media1/gramps_index:/app/indexdir"  # persist search index
      - "/mnt/media1/gramps_thumb_cache:/app/thumbnail_cache"  # persist thumbnails
      - "/mnt/media1/gramps_cache:/app/cache"  # persist export and report caches
      - "/mnt/media1/gramps_secret:/app/secret"  # persist flask secret
      - "/mnt/media1/gramps_db:/root/.gramps/grampsdb"  # persist Gramps database
      - "/mnt/media1/gramps_media:/app/media"  # persist media files
      - "/mnt/media1/gramps_tmp:/tmp"

  grampsweb_celery:
    <<: *grampsweb  # YAML merge key copying the entire grampsweb service config
    ports: []
    container_name: grampsweb_celery
    depends_on:
      - grampsweb_redis
    command: celery -A gramps_webapi.celery worker --loglevel=INFO

  grampsweb_redis:
    image: redis:7.2.4-alpine
    container_name: grampsweb_redis
    restart: always

volumes:
  gramps_users:
  gramps_index:
  gramps_thumb_cache:
  gramps_cache:
  gramps_secret:
  gramps_db:
  gramps_media:
  gramps_tmp:

2. Run sudo docker compose up -d

Installing Nextcloud

Nextcloud is pretty much a personal Google Drive.

1. Make a nextcloud-app directory in your home directory.

2. For myself, I made two directories for my data at /mnt/media1/nextcloud-data/db and /mnt/media1/nextcloud-data/data

3. Make a docker-compose.yml file with these contents (add the passwords and adjust the mounts as needed). Note: I used the file given at https://github.com/nextcloud/docker?tab=readme-ov-file#base-version---apache to help with this.

  • Do not use any special characters in the database passwords. Change the passwords as needed.

version: '2'

services:
  db:
    image: mariadb:10.6
    restart: always
    command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW
    volumes:
      - /mnt/media1/nextcloud-data/db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=passwordHere
      - MYSQL_PASSWORD=passwordHere
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud

  app:
    image: nextcloud
    restart: always
    ports:
      - 8080:80
    links:
      - db
    volumes:
      - /mnt/media1/nextcloud-data/data:/var/www/html
    environment:
      - MYSQL_PASSWORD=passwordHere
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db
      - PHP_UPLOAD_LIMIT=100G

Note: Other environment variables can be found at https://github.com/nextcloud/docker?tab=readme-ov-file#auto-configuration-via-environment-variables 

4. Run sudo docker compose up -d

5. Go to http://serverip:8080 and follow the instructions. You will need your database password on this step, after making your account (by default it will think you want a SQLite database; you don’t want that, so choose MySQL/MariaDB and use the credentials from your compose file).

6. You will then need to set up your Caddyfile to reverse proxy to this port and go through something like Cloudflare

7. When you have done the above, you will get an error when connecting.

8. To fix it, edit the config file that the error message points you to, so that it has something like this. What helped for me was changing:

'overwrite.cli.url' => '192.168.1.101:8080',

to

'overwritehost' => 'example.org', 'overwriteprotocol' => 'https',

9. You are done. Files are stored in a location such as /mnt/media1/nextcloud-data/data/data/menghinitravis/files

NextCloud With OnlyOffice

I largely followed the directions at https://helpcenter.onlyoffice.com/installation/docs-community-docker-compose.aspx

  1. Make a new directory called openoffice-app in your home directory.
  2. Run this command: git clone https://github.com/ONLYOFFICE/Docker-DocumentServer
  3. Make several changes to this file… (to be added later)
  4. When you run sudo docker compose up -d, it will take a while because it is downloading everything needed to run. It is very large.
  5. When it is running, go to your installation on the web. It will show you a command to run.
  6. Run that command in your terminal and copy the output (your secret key) to your clipboard.
  7. Go to the circle at the top right of your Nextcloud, click it, and click “Apps”
    1. Find the “ONLYOFFICE” app and install it.
  8. Go to your circle again and click “Administration settings”
  9. Go to “ONLYOFFICE” in your administration settings
  10. Fill out the data as needed. Your secret key is what you found with that command earlier.
  11. You are done! You can now make and edit documents!

Setting Up SMTP Email

  1. Go to https://myaccount.google.com/apppasswords and create an app password for Nextcloud.
  2. Go to the “Basic Settings” in your administration settings on Nextcloud
  3. Fill out the SMTP fields with Gmail’s SMTP server, your Gmail address, and the app password you just created.
  4. Send a test email to see if it works.

Nextcloud Bookmarks

I installed the Nextcloud Bookmarks app through the admin interface in Nextcloud

Backing Up Large Files Using Online Accounts

Changing Chunk Size

Within Linux, you can use “Online Accounts” to connect to Nextcloud. However, if you are using Cloudflare’s proxy, you cannot back up files that are larger than 100 MB without changing the chunk size.

To do so, you will have to change the client’s chunk size. Setting it to 50 MB (half of Cloudflare’s 100 MB upload size limit) worked to resolve this issue.

Linux

Note: This fix might only be for using the official Nextcloud app.

Open a terminal window and edit the following file:

nano $HOME/.config/Nextcloud/nextcloud.cfg

Add the following line under the [General] section:

maxChunkSize=50000000

Save the file (Ctrl+O, then Ctrl+X), then quit the Nextcloud desktop client and start it again.

There are three choices:

1) upload large files directly in the browser

2) let the Windows Nextcloud app upload in smaller chunks

3) make a separate Nextcloud subdomain that doesn’t go through the Cloudflare proxy.

I chose to do number 3, assuming that Online Accounts can’t do smaller chunk sizes.

My workaround was to make another DNS record without using the proxy. However, you will need to edit your Caddyfile to still have a reverse proxy for it, in addition to editing /nextcloud-data-folder/data/config/config.php

and adding one more trusted domain:

 'trusted_domains' =>
  array (
    0 => '192.168.1.101:8080',
    1 => 'drive.menghini.org',
    2 => 'whateverLongNameYouWantSoNooneCanGuessIt.menghini.org',
  ),

When signing into the Online Accounts, just use that long one as the name.

Transferring Nextcloud From One Server To Another

Nextcloud was the only container I had issues with transferring. All of the files were owned by the wrong user. In order to fix this, I had to change the owner of the files and folders from within the docker container.

  1. First find out the container ID by typing
    1. sudo docker ps
  2. Find the Nextcloud container ID and type this line:
    1. sudo docker exec -it nextcloud_containerID bash
  3. Now change all the files to match something like this:
    1. (In my case, the wrong files, such as /apps, were owned by root and not www-data)
    2. The command to do this was chown -R www-data:www-data config
    3. You will probably have to do this with many folders.
  4. I then got an error about apps updating. I had to run this command from outside the docker container:
    1. sudo docker exec -u 33 -it nextcloud_containerID php occ upgrade -v
  5. I then had an issue with every URL containing an extra /index.php/ in it
    1. I edited /config/config.php to include this line
      1. 'overwrite.cli.url' => 'https://drive.menghini.org',
    2. I then ran this command:
      1. sudo docker exec -u www-data -it nextcloud_containerID php /var/www/html/occ maintenance:update:htaccess
  6. As far as I am aware, this made everything work again.

Mealie

Mealie is a web server that keeps recipes.

1. Make a location where your files will be saved. Mine is /mnt/media1/mealie-data

2. Make a location where your docker-compose.yml file is. Mine is ~/mealie-app

3. These are the contents of the file (adjust the values as needed):

services:
  mealie:
    image: ghcr.io/mealie-recipes/mealie:latest
    container_name: mealie
    restart: always
    ports:
      - "9925:9000"
    deploy:
      resources:
        limits:
          memory: 1000M
    volumes:
      - /mnt/media1/mealie-data:/app/data/
    environment:
      # Set Backend ENV Variables Here
      ALLOW_SIGNUP: false
      PUID: 1000
      PGID: 1000
      TZ: America/Anchorage
      MAX_WORKERS: 1
      WEB_CONCURRENCY: 1
      BASE_URL: https://recipes.menghini.org

volumes:
  mealie-data:

Setting up SMTP Email

  1. Go to https://myaccount.google.com/apppasswords and create an app password for Mealie.
  2. Add the following info in your docker-compose file and change as needed

      BASE_URL: https://recipes.menghini.org   # already in there; insert the rest below
      SMTP_HOST: "smtp.gmail.com"
      SMTP_PORT: "587"
      SMTP_FROM_NAME: "[email protected]"
      SMTP_FROM_EMAIL: "[email protected]"
      SMTP_AUTH_STRATEGY: "TLS"
      SMTP_USER: "[email protected]"
      SMTP_PASSWORD: "password here"

  3. Restart with sudo docker compose down && sudo docker compose up -d

Installing WordPress

WordPress is software that allows you to have a website/blog.

1. Make a location where your files will be saved. Mine is /mnt/media1/wordpress-data

2. These are the contents of the docker-compose.yml file (adjust the values as needed):

services:
  db:
    # We use a mariadb image which supports both amd64 & arm64 architecture
    image: mariadb:10.6.4-focal
    # If you really want to use MySQL, uncomment the following line
    #image: mysql:8.0.27
    command: '--default-authentication-plugin=mysql_native_password'
    volumes:
      - /mnt/media1/wordpress-data/db_data:/var/lib/mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=changeThis
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=changeThis
      - MYSQL_PASSWORD=changeThis
    expose:
      - 3306
      - 33060

  wordpress:
    image: wordpress:latest
    volumes:
      - /mnt/media1/wordpress-data/wp_data:/var/www/html
    ports:
      - 4578:80
    restart: always
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=whateverYouPutAbove
      - WORDPRESS_DB_PASSWORD=whateverYouPutAbove
      - WORDPRESS_DB_NAME=wordpress

volumes:
  db_data:
  wp_data:

Stopping All Docker Containers

If you want to stop all Docker Containers without stopping each one individually, you can run this command:

sudo docker ps -q | sudo xargs docker stop

Backing Up To An External Hard Drive Using Rclone

You will want to back up your files to something such as an external hard drive. In order to do that, I did the following steps:

  1. Plug in your external hard drive
  2. Type lsblk -lf to find your new drive.
  3. If it is not ext4, you will have to format it using the steps below:
    1. To make it ext4, run (where X is the drive) sudo shred -v -n1 -z /dev/sdX
      1. This command only needs to start; it doesn’t have to do the entire disk. Overwriting part of it is enough to wipe out the old filesystem.
    2. The disk should now be blank; if it is, it will show no disklabel with the command below (it may show several blank partitions)
      1.  lsblk -lf
    3. Now run sudo parted -l to see if there are any partition tables. Mine said “unknown”, in which case you have to make one
      1. Type sudo parted /dev/sdx
      2. Type select /dev/sdx to make sure you are changing the disk you want
      3. Type mklabel gpt and confirm, if there are any options.
      4. Type q to quit
    4. Use gdisk to create a GPT partition on the disk. For example:
      1. sudo gdisk /dev/sdx
        1. Press n to create a new partition.
        2. Choose a partition number (usually 1).
        3. Set the size (press Enter to use the default).
        4. Choose a partition type (press Enter for the default).
        5. Press w to write the changes.
    5. Type sudo mkfs.ext4 /dev/sdx to make the entire disk ext4, and confirm if it asks you to proceed.
    6. You can verify that this was done with lsblk -lf
  4. [if you haven’t done so already…] Create a new directory for your new mount point. For example, sudo mkdir /media/externalHDD. Remember, /media is normally used for temporary device mounts.
  5. To mount the drive to the directory, use the syntax sudo mount -t ext4 /dev/sdx /media/externalHDD
  6. I then typed sudo chmod -R 777 /media/externalHDD/ so that all the users have permissions.
  7. Install Rclone, if you haven’t already:
    1. sudo apt install rclone
    2. I had to restart services when I did this, but it did not cause me any issues
  8. For this example, I am going to back up everything in /mnt/media1 (except for the lost+found directory).
  9. I recommend doing this in a tmux session, as this will take a long time. Just type: tmux.
    1. Tmux will run in a terminal, regardless if you leave the SSH or disconnect. You can also continue this tmux session on a different terminal or computer.
    2. Whenever you want to leave tmux, you have two options:
      1. Press Ctrl+b, then d
        1. This will detach you from the session. In order to get back to this session, you will type tmux a
        2. If you have multiple tmux sessions (made by typing tmux again), you can switch between them by pressing Ctrl+b, then s.
      2. Type exit
        1. This will close the session
  10. I typed the following command in my tmux session:
    1. sudo rclone sync /mnt/media1/ /media/externalHDD/backupMedia1/ --exclude lost+found/** -P --log-file ~/rcloneLogFileMedia1
      1. The -P flag shows progress
      2. I am also storing a log file in my home directory, in case there are errors.
      3. This took 11 hours and 38 minutes for my first backup of 1.094 TB over USB 2.0
  11. I also wanted to backup my home directory and started another tmux session and typed:
    1. sudo rclone sync ~/ /media/externalHDD/backupHome/ -P --log-file ~/rcloneLogFileHome --exclude '/snap/**' --exclude 'rcloneLogFileMedia1' --exclude 'rcloneLogFileHome'
  12. If you are running an Immich server, you will also have to backup the database manually by using this command from the Immich directory:
    1. sudo docker exec -t immich_postgres pg_dumpall -c -U postgres | gzip > "/path/to/backup/dump.sql.gz"
    2. In my case, I ran this command (and it took a few minutes):
      1. sudo docker exec -t immich_postgres pg_dumpall -c -U postgres | gzip > "/media/externalHDD/immich20240122.sql.gz"
    3. You will have to do this for any service that uses a database server (like PostgreSQL or MySQL); anything in a SQLite database is just a file, so the normal file backup covers it.
  13. When you are done, unmount the disk by typing:
    1. umount /media/locationOfYourMount
    2. If you get a “umount: /media/externalHDD: target is busy.” error, then you need to stop the processes that are using it.
      1. You can find out which processes are using it by typing sudo lsof /Path/to/target
      2. In my case, I had a tmux session and two different bash windows open to it. I had to close them
  14. To restore from the backup (i.e., to copy in the reverse direction), I ran these commands:
    • sudo rclone sync /media/externalHDD/backupMedia1/ /mnt/media1/ --exclude lost+found/** --exclude node_modules/** -P --log-file ~/rcloneLogFileMedia1_reverse
    • sudo rclone sync /media/externalHDD/backupHome/ ~/ -P --log-file ~/rcloneLogFileHome --exclude '/snap/**' --exclude 'rcloneLogFileMedia1' --exclude 'rcloneLogFileHome'
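
If you would rather automate this than kick it off by hand in tmux (I run mine manually), a root cron job is one option. This is a sketch that assumes the external drive stays plugged in and mounted at /media/externalHDD; edit root’s crontab with sudo crontab -e and add:

# run the media backup every Sunday at 02:00
0 2 * * 0 rclone sync /mnt/media1/ /media/externalHDD/backupMedia1/ --exclude 'lost+found/**' --log-file /root/rcloneLogFileMedia1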

Using Tmux

I highly recommend using Tmux in the terminal so you can have multiple terminal sessions, and even keep commands running while you are logged off.

Common shortcuts:

  • Running tmux will start a new session.
  • Ctrl + b + % splits the current pane vertically, creating a new pane on the right.
  • Ctrl + b + " (double quote) splits the current pane horizontally, creating a new pane on the bottom.
  • Ctrl + b + s switches sessions
  • Ctrl + b + d detaches from your session and saves it
  • Running tmux a will bring back the last session