Intro
With the rise of AI and companies increasingly wanting to scan your pictures, videos, and files, I decided to look into self-hosting my own media server.
A few things I wanted:
- A way to listen to music I have legally downloaded and keep my playlist and favorites intact, no matter what device I was using.
- A way to stream my legally obtained videos.
- A way to keep my photos in an organized and shareable library.
- My own Google Drive/Dropbox that I can hold my files and share with friends and family.
I was able to successfully achieve this by setting up my own Media Server with Ubuntu Server and Docker Containers. A scary setup for sure, but it was well worth the effort.
Prerequisite Knowledge In Order To Follow This Guide
Here are some prerequisites that I am going to assume you know:
- How To Use Bash (Linux Terminal)
- If you don’t know how to navigate using commands in a Linux terminal such as cd, ls, pwd, chmod, nano/vi(m) then a media server is probably not for you.
- Likewise, do you know what and where ~/ is?
- SSH
- Hand in hand with Bash, if you don’t know how to SSH, I don’t think you should make a media server.
- You can’t have the mindset of “I only want to use a GUI”:
- You are literally going to set up a server that won’t have a monitor, keyboard, or mouse plugged into it after setup. Why would you waste resources on running a GUI and managing updates for it?
- Knowledge of, and the ability to do, port forwarding
- You must know how to port forward, and you must be able to. Not having direct access to the router will not work.
Hardware I Have Used
If you want to set up your own server, you will obviously need a computer. I first started with my Windows 10 desktop, which only hosted my music. As my needs grew, I decided to set up a dedicated Ubuntu server on an old PC from my college days. Here are the specs of that old PC.
My Original Server Setup From My Old College PC
I originally used my old college PC from 2012 as my media server. For reasons stated in the next section, I later replaced it with a newer computer, but here were the specs:
- Case: NZXT Apollo Black SECC Steel Chassis ATX Mid Tower Computer Case
- Motherboard: ASUS P6T LGA 1366 Intel X58 ATX Intel Motherboard
- Power Supply: Antec TPQ-850 850W Continuous Power ATX12V / EPS12V SLI Certified CrossFire Ready 80 PLUS BRONZE Certified Modular Active PFC “compatible with Core i7/Core i5” Power Supply
- Graphics Card: BFG Tech GeForce GTX 295 1792MB GDDR3 PCI Express 2.0 x16 SLI Support Video Card BFGEGTX2951792E
- It used to have a second one of these cards in it, but I took it out in favor of the USB 3.0 PCI Express expansion card (below). With only one GPU, idle power draw dropped by about 120 watts.
- The reason I got the USB 3.0 card was that it allowed for faster backups (from 9 hours and 55 minutes on USB 2.0 down to ~3 hours and 9 minutes for 1.1TB of data).
- I purchased this to have faster backups: FebSmart 4 Ports Superspeed 5Gbps USB 3.0 PCI Express Expansion Card for Windows 11, 10, 8.x, 7, Vista, XP Desktop PCs, Built in Self-Powered Technology, No Need Additional Power Supply (FS-U4-Pro)
- CPU: Intel Core i7-920 – Core i7 Bloomfield Quad-Core 2.66 GHz LGA 1366 130W Processor – BX80601920
- RAM: A total of 12GB of OCZ Gold 6GB (3 x 2GB) DDR3 1600 (PC3 12800) Low Voltage Desktop Memory Model OCZ3G1600LV6GK
- Hard disk (3 total):
- An unknown HDD
- Where Ubuntu Server is installed
- It’s 1.5TB but has a 968G partition for Ubuntu Server.
- The other part of the disk is a Windows 8 install from college that I didn’t want to remove.
- Western Digital RE WD4000FYYZ 4TB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5in Enterprise Internal Hard Drive – OEM w/3 Year Warranty (Renewed)
- Western Digital RE 4TB 7200RPM 64MB Cache SATA 6.0Gb/s 3.5″ Enterprise Internal Hard Drive (WD4000FYYZ) OEM
- Backup External Disk
- With a sticky note that says: Menghini Server Backup #1
- WD 4TB Elements Portable HDD, External Hard Drive, USB 3.0 for PC & Mac, Plug and Play Ready – WDBU6Y0040BBK-WESN
- Will backup at the end of even months (February, April, etc)
- Kept off site when not in use.
- Although it says this is #1, this was bought a few days later
- With a sticky note that says: Menghini Server Backup #2
- WD 4TB Elements Portable HDD, External Hard Drive, USB 3.0 for PC & Mac, Plug and Play Ready – WDBU6Y0040BBK-WESN
- Will backup at the end of even months (February, April, etc)
- Kept off site when not in use.
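As a sanity check on the backup times mentioned above (9h55m on USB 2.0 vs ~3h09m on USB 3.0 for 1.1TB), the implied transfer rates can be computed directly; the first lands right around USB 2.0’s practical limit of roughly 30-40 MB/s (treating 1.1TB as decimal bytes):

```shell
bytes=1100000000000                        # 1.1 TB backed up
usb2_secs=$(( (9 * 60 + 55) * 60 ))        # 9h55m in seconds
usb3_secs=$(( (3 * 60 + 9) * 60 ))         # 3h09m in seconds
# bytes / seconds / 1e6 = megabytes per second, rounded to whole numbers
usb2=$(awk -v b="$bytes" -v s="$usb2_secs" 'BEGIN { printf "%.0f", b / s / 1000000 }')
usb3=$(awk -v b="$bytes" -v s="$usb3_secs" 'BEGIN { printf "%.0f", b / s / 1000000 }')
echo "USB 2.0: ~${usb2} MB/s, USB 3.0 card: ~${usb3} MB/s"
```

So the upgrade roughly tripled the effective throughput, at which point the spinning disks themselves become the bottleneck rather than the bus.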
Why I Decided To Make A New Server Build
After running my server for six months straight, I started to notice issues with my 12-year-old hardware.
- The power consumption was enormous (and thus the cost). Using a Watt Meter, I estimated that it cost me $0.76 a day to run my original college server (it had two graphics cards at first), idling at 331 Watts. Once I removed one of those, it would idle at about 186 watts. Newer CPUs can idle around 5 Watts.
- The CPU couldn’t handle running all of my services. By the time I retired the old server, I was running Immich (photos), Navidrome (music), Jellyfin (video), Nextcloud (file storage), Gramps (ancestry), Mealie (recipe tracker), Netdata (statistics), and Caddy (for my reverse proxies). More specifically, it couldn’t keep up with scanning my photos to identify people and objects.
- I was running out of storage space. I could have upgraded the HDDs, but I figured I would just upgrade the system at the same time.
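To show where the $0.76/day figure comes from, here is the arithmetic as a quick script, assuming an electricity rate of about $0.095/kWh (an assumption on my part; check your own bill):

```shell
watts=331        # measured idle draw of the original two-GPU build
rate=0.095       # assumed price per kWh in dollars (illustrative)
# watts * 24 h / 1000 = kWh per day; multiply by the rate for dollars per day
cost=$(awk -v w="$watts" -v r="$rate" 'BEGIN { printf "%.2f", w * 24 / 1000 * r }')
echo "Estimated daily cost: \$$cost"
```

By the same math, the 186-watt single-GPU setup works out to about $0.42/day, and a 25-watt mini PC to about $0.06/day.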
My New Build
My new and current build’s hardware consists of:
- Base: R1 Intel N100 Mini PC 4C/4T, supports 40TB (2 × 20TB) of 2-bay storage, no RAM/no SSD, Windows 11 Pro, WiFi 6, dual 2.5G LAN ($199.00)
- RAM (the base maxes out at 32GB): Crucial RAM 32GB DDR4 3200MHz CL22 (or 2933MHz or 2666MHz) Laptop Memory CT32G4SFD832A ($66.99)
- NVMe: Samsung 970 EVO Plus SSD 1TB NVMe M.2 Internal Solid State Drive, V-NAND Technology, w/Heat Control, MZ-V7S1T0B/AM ($101.40)
- Hard drives: 2 × HGST Ultrastar He10 10TB SATA 6.0Gb/s 7200RPM 3.5″ Datacenter HDD – HUH721010ALE601 ($79.99 each)
- Currently, the same external HDDs that I used in my college PC, until I exceed 4 TB.
It idles at 25 watts, in comparison to the 186 watts my college PC was using. It’s better for both the environment and my wallet!
What Is A Docker Container
In order to understand how my media server runs, you need to know what a Docker Container is. A Docker Container is like a virtual machine, although it isn’t emulating an operating system. Rather, it keeps all of your software contained. If one of these containers is compromised by an unauthorized source, only that container is affected; the intruder can’t access anything outside of it.
For example, my Navidrome container (music server) only has access to my /mnt/media1/navidrome-data folder, and that’s it. In fact, it thinks all of its data is at /data, which isn’t even a directory on my server. In essence, Docker fools everything inside the container into thinking a folder is somewhere it isn’t, and the container can’t reach anything outside of it. This is an oversimplified explanation, but Docker does the same thing with port numbers: if two services both want port 80, you can expose one of them on a different host port (because only one thing can listen on a given port).
Another nice thing about Docker Containers is that they can start automatically when you reboot or turn on your computer (as long as the container is configured with a restart policy such as restart: unless-stopped).
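To make this concrete, here is a minimal, illustrative docker-compose.yml showing both tricks described above: a host folder exposed inside the container as /data, and an internal port 80 published on the host as 8080 (the image name and paths are made up):

```yaml
services:
  example:
    image: some/image:latest                 # illustrative image name
    restart: unless-stopped                  # comes back up after reboots
    ports:
      - "8080:80"                            # host port 8080 -> container port 80
    volumes:
      - /mnt/media1/example-data:/data       # host folder appears as /data inside
```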
Common Docker Compose Commands
Here are some common commands that are useful when using Docker Containers. They are all run inside of the directory containing the docker-compose.yml file.
- Start a container and read its output:
sudo docker compose up
- Start a container detached (it will run on its own and start up on the next reboot):
sudo docker compose up -d
- Shut down a container
sudo docker compose down
- Restart a container
sudo docker compose down && sudo docker compose up -d
- Update a container
sudo docker compose pull && sudo docker compose up -d
- Delete a container and its volumes (note: -v also deletes named volumes, i.e., stored data)
sudo docker compose down -v
- Run bash commands inside of a container
- Unlike the previous commands, this can be done from anywhere.
- 1) Get the id of your container by typing
sudo docker ps
- 2) Run
sudo docker exec -it <container_name_or_id> /bin/bash
Updating a Docker Container
- I highly recommend keeping your Docker Containers up to date.
- All updates and patch notes can typically be found on the project’s GitHub, such as Immich’s at https://github.com/immich-app/immich/releases.
- I personally subscribe to release and security alerts through the GitHub repository page of anything I use.
- As mentioned previously, type
sudo docker compose pull && sudo docker compose up -d
- The command above works for all Docker Containers (as long as you used the latest tag in the Docker Compose file)
Software Used
I run a ton of things on my media server. Below you will find a list of everything I use, what my Docker files look like, and what I use each service for.
Porkbun – Domain Name
I originally used Google Domains to purchase my domain, but they are no longer in operation (as all Google products eventually end up being shut down), so I instead recommend using Porkbun to purchase a domain name (such as menghini.org).
Installing Ubuntu Server
- Obtain Ubuntu Server at https://ubuntu.com/server
- Use a program such as Etcher to flash it to a USB drive.
- Install Ubuntu server (the first option) when asked (not minimized or third-party drivers)
- The next screen is network connections. Connect it to the internet.
- Then it is proxy settings.
- The next screen is an archive mirror. I just clicked Done.
- The next screen is the guided storage configuration. This is which hard disk you want to install to. I chose the entire disk with the LVM group. LVM will allow you to add more hard disks in the future and have them act as one (two 1TB hard disks could be seen as a single 2TB disk).
- The next screen asks about the storage configuration. I chose to only use 100G (the default). More info can be found at https://ubuntu.com/server/docs/install/storage
- When you are asked for a username, consider that it will be username@serverName. I don’t think your actual name matters.
- I skipped Ubuntu Pro.
- It will ask you if you want to install OpenSSH. You should do so.
- It will eventually ask if you want to install Docker, amongst MANY options. You should install Docker; this installs both Docker and Docker Compose, which will make downloading things much easier.
- Once the installer finishes, you are ready to reboot.
- As soon as you are logged in, you should be able to SSH into the computer using PuTTY.
- Type ip addr to get your IP address.
- To shut down, type sudo shutdown now
- If you are using a laptop, you may not want the computer to go to sleep or turn off when the lid is shut, as you probably won’t be sitting at it. If so, do this (credit to this guide):
- Edit the /etc/systemd/logind.conf file:
- sudo vim /etc/systemd/logind.conf
- Uncomment the #HandleLidSwitch=suspend line and change it to HandleLidSwitch=ignore
- Also, uncomment the #LidSwitchIgnoreInhibited=yes line and change it to LidSwitchIgnoreInhibited=no
- Also, uncomment the #HandleLidSwitchDocked=suspend line and change it to HandleLidSwitchDocked=lock
- Run sudo systemctl restart systemd-logind.service
- You should be all set to go. I strongly recommend setting a static IP address on your router to the server.
- I would also recommend you take a look at your BIOS settings and set up your server to turn on automatically when it receives power.
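For reference, after the lid-switch edits the three relevant lines in /etc/systemd/logind.conf should read:

```ini
HandleLidSwitch=ignore
LidSwitchIgnoreInhibited=no
HandleLidSwitchDocked=lock
```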
Accessing the Server
To remote into your server, you should consider the following programs:
- On a Windows Machine
- SSH – Putty
- Transfer Files – WinSCP
- On a Mac
- SSH – Regular Terminal
- Transfer Files – Cyberduck
- Linux
- SSH
- Transfer Files – SSH with your file browser
- To connect to another Linux server in Zorin, go to your file manager and click “Other Locations”
- In “Connect to Server” type sftp://username@ipaddress/
- Outside of Network
- I port forward port 22 only for certain IP addresses (work, parents’ house). You should also have an A record without a proxy to do this, such as ssh.menghini.org.
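If you connect from Linux or Mac, a ~/.ssh/config entry on the client saves retyping the details each time (hostname and username here are illustrative):

```
Host mediaserver
    HostName ssh.menghini.org
    User yourusername
    Port 22
```

After this, typing ssh mediaserver connects directly.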
Updating Ubuntu Server
- To refresh the package lists, type:
sudo apt update
- To upgrade, type:
sudo apt upgrade
- If there was a kernel update, you can type uname -r to see your current kernel version before rebooting. Write down that number so you can verify the update after the reboot.
- Reboot by typing:
sudo reboot
- If there was a kernel update, type uname -r again after rebooting. You should see a different number now, verifying that it worked.
Mount a Hard Disk
If you are wanting to install RAID 1, you should see that section: Setup Software RAID 1 Using mdadm
If you want to mount another hard disk, do it now:
- You will have to partition your hard disk as ext4 using gparted 1.4.06-amd64.iso (AND NOT NEWER) before you do this.
- Run lsblk to see what hard disks you have.
- You need a directory to serve as the mount point for the new disk. You can create this anywhere, but traditionally it’s done in /mnt or under /media.
- I want to use /dev/sda1. You may read something about mounting in /mnt vs /media. The general rule is:
- use /mnt for stuff you mount by yourself
- leave /media for the system to mount its stuff
- This doesn’t really matter, but if you were using Ubuntu Desktop, some of this would be automatic when plugging in a flash drive.
- So we will use /mnt.
- I mounted mine by using this command: sudo mount /dev/sda1 /mnt/media1
- You can verify that this worked with df -h or lsblk
- Now you want this to mount every time you boot. You will need the UUID of the disk, so type blkid to find it.
- Edit /etc/fstab using your favorite text editor (such as vim) with sudo.
- Add the following line, where /mnt/media1 is your mount directory:
UUID=whateverYourUUIDIs /mnt/media1 ext4 defaults 0 2
- Run sudo reboot to see if it works.
- If it doesn’t work (you will reboot into emergency mode), like mine did, it might be because you are getting this error:
/dev/disk/by-uuid/2b48a758-62b2-45a3-8b59-0be595c44512 has unsupported features: FEATURE_c12
- If that is the case, your e2fsprogs program is out of date, and in my version of Ubuntu, I couldn’t update it without compiling it myself.
- I had to download an older version of gparted in order to fix it.
- Now, in order to use the disk, you will need to type sudo chmod -R 777 /mnt/media1. Unless of course, you like to type sudo a lot.
- You can see what is currently mounted with sudo parted -l
Setup Software RAID 1 Using mdadm
I use RAID 1 with two hard disks so that they are always mirrored. If one breaks, it is unlikely that both will break at the same moment, so I have time to replace the failed one.
Install RAID Array
The steps below are from https://www.linuxbabe.com/linux-server/linux-software-raid-1-setup
- Install mdadm: sudo apt install mdadm
- Reboot your machine by typing sudo reboot
- Type sudo fdisk -l to see all of your disks. They should have no Disklabel.
- If one has a Disklabel (such as gpt), that’s a NO. In that case, type (where X is the drive you are wiping):
sudo shred -v -n1 -z /dev/sdX
- This command doesn’t need to run to completion; it doesn’t have to wipe the entire disk. Overwriting part of it renders the old partition table useless, which is good enough.
- Now check that the disks are blank; they should show no Disklabel type.
- Use gdisk to create a GPT partition on both disks. For example: sudo gdisk /dev/sdb
- Press n to create a new partition.
- Choose a partition number (usually 1).
- Set the size (press Enter to use the default).
- Choose a partition type (press Enter for the default).
- Press w to write the changes.
- Do the steps again for the 2nd disk (for me it was /dev/sdc)
- Now you can use mdadm to create the RAID 1 array with the two partitions you created (adjust the device names to match your drives):
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
- /dev/md0 is the name of your RAID device.
- --level=1 specifies RAID 1 (mirror).
- --raid-devices=2 indicates that there are 2 devices in the array.
- You can check the status of your RAID array using:
cat /proc/mdstat
- You can now create a filesystem on your RAID array. For example, to create an ext4 filesystem:
sudo mkfs.ext4 /dev/md0
- Create a mount point and mount the array:
sudo mkdir /mnt/myraid
sudo mount /dev/md0 /mnt/myraid
- To ensure the RAID array is mounted at boot, add an entry to your /etc/fstab file:
echo '/dev/md0 /mnt/myraid ext4 defaults 0 0' | sudo tee -a /etc/fstab
- I rebooted my computer. When it restarted, it broke, so I had to do the following:
- After it failed to mount the disk for a minute and a half, I went into emergency mode and ran mdadm --detail --scan
- The result was: “ARRAY /dev/md/menghiniserver:0 metadata=1.2 name=menghiniserver:0 UUID=...”
- I then ran sudo mdadm --detail /dev/md/menghiniserver:0 to get the status of my RAID. It worked.
- I then ran sudo mount /dev/md/menghiniserver:0 /mnt/media1
- I then edited my /etc/fstab file so it would boot correctly:
/dev/md/menghiniserver:0 /mnt/media1 ext4 defaults 0 0
If that didn’t work…
- I was getting this error on reboots: “A stop job is running for availability of block devices”
- The name of my RAID device kept changing, so I decided to do this instead:
- I ran blkid and found the UUID for the /dev/md0 mount.
- I then used sudo vi /etc/fstab and added the UUID instead.
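The resulting /etc/fstab line has the same shape as the single-disk entry from earlier, just pointing at the array’s UUID (placeholder shown; get yours from blkid):

```
UUID=whateverYourArrayUUIDIs /mnt/media1 ext4 defaults 0 0
```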
Monitor the RAID Array
Use mdadm to monitor and manage the RAID array:
- To check the status: sudo mdadm --detail /dev/md0
- To add or remove devices (for example): sudo mdadm --manage /dev/md0 --add /dev/sdd1
- To stop the array: sudo mdadm --stop /dev/md0
You will also be able to monitor this, with email updates, through software called Netdata, covered in the next section.
Netdata (Monitoring Software)
I use Netdata to monitor my server remotely. It will email you and send you a push notification for any critical issues.
To install it, I did this:
- Type: wget -O /tmp/netdata-kickstart.sh https://get.netdata.cloud/kickstart.sh && sh /tmp/netdata-kickstart.sh
- You can see your server locally at http://ipaddress:19999/
- You can find your IP address by typing ip addr in your terminal.
- If you want to monitor the CPU temperature in Netdata, install lm-sensors by typing: sudo apt install lm-sensors
- Then type sudo sensors-detect
- Then it should show up in Netdata.
- To start Netdata, run sudo systemctl start netdata
- To stop Netdata, run sudo systemctl stop netdata
- To restart Netdata, run sudo systemctl restart netdata
Cloudflare (Hiding Your IP Address)
I use Cloudflare’s free tier to hide my IP address. Why? To mitigate DDoS attacks. With Cloudflare as a proxy, visitors connect to Cloudflare first; if you are receiving too many requests, Cloudflare will start running a robot check on users before connecting them to you.
This is how I set it up:
- Go to https://www.cloudflare.com/ and make a free account.
- Type in your top level domain name such as menghini.org and then follow the prompts to connect your domain.
- When I set up mine, I got ERR_TOO_MANY_REDIRECTS when connecting to any of my services. To fix this, I had to go to the SSL/TLS overview settings and change the mode from the default “Flexible” to “Full”.
SSH (or other ports) Using Cloudflare Or Uploading More Than 100MB
Cloudflare only allows these HTTP ports
- HTTP Ports: 80, 8080, 8880, 2052, 2082, 2086, 2095
- HTTPS Ports: 443, 2053, 2083, 2087, 2096, 8443
When using their proxy, Cloudflare only allows you to upload 100MB at a time for a single file. You cannot change this. This becomes an issue with software such as Immich (photos) and Nextcloud (storage), as I tend to upload videos that are larger than 100MB.
If you want to SSH or do other port forwarding for things such as games, you will need an A record that goes directly to your IP without the proxy. There is no workaround.
If you are going to allow SSH from outside of your network, I highly recommend that you only port forward it for approved IP addresses; otherwise, anyone can try to connect to you without any DDoS protection, which means someone can try to guess your password (you should use a passphrase).
Caddy (SSL Certificates, Reverse Proxy)
First and foremost, you will want to run Caddy. Although Caddy can do many different things, the two things I use it for are reverse proxying and automatic SSL certificates, which both come in the same line of config. Instead of exposing a port like 2283 and typing in an IP address (which can’t have an SSL certificate), Caddy’s reverse proxy lets a service answer on ports 80 (HTTP) and 443 (HTTPS) at a name like photos.menghini.org.
To install Caddy and with an example, here are the directions:
- Make a directory called caddy-app in your home directory
- In that directory, make a file called docker-compose.yml with these contents.
version: "3.9"
services:
  caddy:
    image: caddy:2-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
volumes:
  caddy_data:
  caddy_config:
- The image tag (caddy:2-alpine) can be changed to a different version, if desired.
- You now have to make a file called Caddyfile in the same directory as docker-compose.yml with the following contents. Obviously, change the file as needed.
# Global options (if any)
# Define a block for photos.menghini.org
photos.menghini.org {
reverse_proxy 192.168.1.101:2283
}
# Define a block for music.menghini.org
music.menghini.org {
reverse_proxy 192.168.1.101:4533
}
# Refer to the Caddy docs for more information:
# https://caddyserver.com/docs/caddyfile
- This will allow whatever is at port 2283 to be accessed at photos.menghini.org, and likewise for my music.
- To note, 192.168.1.101 is the static IP address of my server on my internal network.
- Run sudo docker compose up -d
- This command starts a Docker Container detached, so it will run on its own and you are free to close your terminal. For debugging purposes, I will sometimes run it without the -d to see any errors.
- If you have issues and need to restart it, run sudo docker compose restart caddy
- You can also turn it off with sudo docker compose down in the directory
- All Docker Containers are started and stopped from the directory where their docker-compose.yml lives.
- At this point, it should start correctly. Note that you will need to port forward port 80 and 443 on your router to point to the ipaddress of the server.
Immich (Photo Storage)
Immich is flat out Google Photos. It is made to mimic every feature; as far as I am aware, every feature Google Photos has, Immich has too. Its biggest strength: Google isn’t scanning through your photos. I used to tell my students, “No Google employee is going to look through your pictures.” However, with how big AI has become, I don’t trust that as much as I used to.
For the next few steps, I followed the guide at https://immich.app/docs/install/docker-compose, but it will be also written below.
- Create a directory of your choice (e.g. ./immich-app) to hold the docker-compose.yml and .env files. Mine is in my home directory.
mkdir ./immich-app
cd ./immich-app
- Download docker-compose.yml by running:
wget https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
- Get the .env file:
wget -O .env https://github.com/immich-app/immich/releases/latest/download/example.env
- Verify that the files are there with ls -la
- Populate the .env file with custom values, if desired, using your favorite text editor (vim).
- Populate custom database information if necessary.
- Populate UPLOAD_LOCATION with your preferred location for storing backup assets.
- I picked /mnt/media1/immich-library
- Consider changing DB_PASSWORD to something randomly generated.
- Start the containers (you may need sudo):
sudo docker compose up -d
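For reference, the two .env values I touched end up looking something like this (the password is a placeholder; generate your own):

```
UPLOAD_LOCATION=/mnt/media1/immich-library
DB_PASSWORD=someLongRandomlyGeneratedPassword
```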
Installing Jellyfin
- Make a directory in your home directory called jellyfin-app
- Personally, I made a folder called jellyfin-data in /mnt/media1 with the folders /config, /cache, /media, and /media2. I made these with sudo, so root owns them.
- I also made /media/movies and /media/shows
- Make a file called docker-compose.yml with the following contents. Change it as needed.
version: '3.5'
services:
  jellyfin:
    image: jellyfin/jellyfin
    container_name: jellyfin
    #user: uid:gid
    network_mode: 'host'
    volumes:
      - /path/to/config:/config
      - /path/to/cache:/cache
      - /path/to/media:/media
      - /path/to/media2:/media2:ro
    restart: 'unless-stopped'
    # Optional - alternative address used for autodiscovery
    environment:
      - JELLYFIN_PublishedServerUrl=http://example.com
    # Optional - may be necessary for docker healthcheck to pass if running in host network mode
    extra_hosts:
      - "host.docker.internal:host-gateway"
- I commented out the user line because I just use the default user (root).
- Run sudo docker compose up -d
- If you have issues and need to restart it, run sudo docker compose restart jellyfin
Installing Navidrome
Navidrome is a server for music files. Basically, it’s your own Spotify.
I followed directions at https://www.navidrome.org/docs/installation/docker/, but they are also listed below.
- I created a directory in my home directory called navidrome-app
- I created a docker-compose.yml with these contents:
version: "3"
services:
  navidrome:
    image: deluan/navidrome:latest
    user: 1000:1000 # should be owner of volumes
    ports:
      - "4533:4533"
    restart: unless-stopped
    environment:
      # Optional: put your config options customization here. Examples:
      ND_SCANSCHEDULE: 1h
      ND_LOGLEVEL: info
      ND_SESSIONTIMEOUT: 24h
      ND_BASEURL: ""
    volumes:
      - "/path/to/data:/data"
      - "/path/to/your/music/folder:/music:ro"
- Run
sudo docker compose up -d
- Navidrome requires that files have metadata prior to uploading them. You should use Mp3tag to make sure that any music you upload has a title, album name, and album artist; otherwise, it won’t combine tracks into albums.
- I personally use Sonixd on my PC. Unfortunately, it is no longer maintained, in favor of Feishin (which has fewer features and is worse in every way). On my iOS device, I use Amperfy to listen to my music.
Installing Gramps
Gramps is genealogy software.
1. Create a new file on the server named docker-compose.yml and insert the following contents. Note: /mnt/media1/ is where you want the files to be stored.
version: "3.7"
services:
  grampsweb: &grampsweb
    image: ghcr.io/gramps-project/grampsweb:latest
    restart: always
    ports:
      - "5000:5000" # host:docker
    environment:
      GRAMPSWEB_TREE: "Gramps Web" # will create a new tree if not exists
      GRAMPSWEB_CELERY_CONFIG__broker_url: "redis://grampsweb_redis:6379/0"
      GRAMPSWEB_CELERY_CONFIG__result_backend: "redis://grampsweb_redis:6379/0"
      GRAMPSWEB_RATELIMIT_STORAGE_URI: redis://grampsweb_redis:6379/1
    depends_on:
      - grampsweb_redis
    volumes:
      - "/mnt/media1/gramps_users:/app/users" # persist user database
      - "/mnt/media1/gramps_index:/app/indexdir" # persist search index
      - "/mnt/media1/gramps_thumb_cache:/app/thumbnail_cache" # persist thumbnails
      - "/mnt/media1/gramps_cache:/app/cache" # persist export and report caches
      - "/mnt/media1/gramps_secret:/app/secret" # persist flask secret
      - "/mnt/media1/gramps_db:/root/.gramps/grampsdb" # persist Gramps database
      - "/mnt/media1/gramps_media:/app/media" # persist media files
      - "/mnt/media1/gramps_tmp:/tmp"
  grampsweb_celery:
    <<: *grampsweb # YAML merge key copying the entire grampsweb service config
    ports: []
    container_name: grampsweb_celery
    depends_on:
      - grampsweb_redis
    command: celery -A gramps_webapi.celery worker --loglevel=INFO
  grampsweb_redis:
    image: redis:7.2.4-alpine
    container_name: grampsweb_redis
    restart: always
volumes:
  gramps_users:
  gramps_index:
  gramps_thumb_cache:
  gramps_cache:
  gramps_secret:
  gramps_db:
  gramps_media:
  gramps_tmp:
2. Run sudo docker compose up -d
Installing Nextcloud
Nextcloud is pretty much a personal Google Drive.
1. Make a nextcloud-app directory in your home directory.
2. For myself, I made two directories for my data: /mnt/media1/nextcloud-data/db and /mnt/media1/nextcloud-data/data
3. Make a docker-compose.yml file with these contents (add the passwords below and change the mounts as needed). Note: I used the file given at https://github.com/nextcloud/docker?tab=readme-ov-file#base-version---apache to help with this.
- Do not use any special characters in the database passwords. Change the passwords and paths as needed.
version: '2'
services:
  db:
    image: mariadb:10.6
    restart: always
    command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW
    volumes:
      - /mnt/media1/nextcloud-data/db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=passwordHere
      - MYSQL_PASSWORD=passwordHere
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
  app:
    image: nextcloud
    restart: always
    ports:
      - 8080:80
    links:
      - db
    volumes:
      - /mnt/media1/nextcloud-data/data:/var/www/html
    environment:
      - MYSQL_PASSWORD=passwordHere
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db
      - PHP_UPLOAD_LIMIT=100G
Note: Other environment variables can be found at https://github.com/nextcloud/docker?tab=readme-ov-file#auto-configuration-via-environment-variables
4. Run sudo docker compose up -d
5. Go to http://serverip:8080 and follow the instructions. You will need your database password on this step, after making your account (by default it will assume you want a SQLite database; you don’t want that).
6. You will then need to set up your Caddyfile to reverse proxy to this port, and go through something like Cloudflare.
7. When you have done the above, you will get an error when connecting.
8. To fix it, edit the config file the error message tells you to edit. What helped was changing:
'overwrite.cli.url' => '192.168.1.101:8080',
to
'overwritehost' => 'example.org', 'overwriteprotocol' => 'https',
9. You are done. Files are stored in a location such as /mnt/media1/nextcloud-data/data/data/menghinitravis/files
NextCloud With OnlyOffice
I largely followed the directions at https://helpcenter.onlyoffice.com/installation/docs-community-docker-compose.aspx
- Make a new directory called openoffice-app in your home directory.
- Run this command: git clone https://github.com/ONLYOFFICE/Docker-DocumentServer
- Make several changes to this file… (to be added later)
- When you run sudo docker compose up -d, it will take a while because it is downloading everything needed to run. It is very large.
- When it is running, go to your installation on the web. It will show you a command for retrieving your secret key.
- Run that command in your terminal and copy the result to your clipboard.
- Click your profile circle at the top right of Nextcloud and click “Apps”
- Find the “OnlyOffice” app and install it.
- Click your profile circle again and click “Administration settings”
- Go to “ONLYOFFICE” in your administration settings
- Fill out the data as needed. Your secret key is the value you found with that command earlier.
- You are done! You can now make and edit documents!
Setting Up SMTP Email
- Go to https://myaccount.google.com/apppasswords and create an app password for Nextcloud.
- Go to the “Basic Settings” in your administration settings on Nextcloud
- Fill out the info with this information
- Try the email to see if it works.
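For Gmail, the values to fill in are typically along these lines (field labels vary slightly between Nextcloud versions; the address is an example, and the password is the app password from step 1):

```
Send mode:       SMTP
Encryption:      STARTTLS (port 587)
From address:    yourname @ gmail.com
Server address:  smtp.gmail.com : 587
Authentication:  required
Credentials:     [email protected] / your app password
```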
Nextcloud Bookmarks
I installed the Nextcloud Bookmarks app through the admin interface in Nextcloud.
- It uses https://floccus.org/ to sync to my web browser.
- There is no iOS sync.
Backing Up Large Files Using Online Accounts
Changing Chunk Size
Within Linux, you can use “Online Accounts” to connect to Nextcloud. However, if you are using Cloudflare’s proxy, you cannot back up files that are larger than 100 MB without changing the chunk size.
Setting the chunk size to 50 MB (half of Cloudflare’s 100 MB upload size limit) resolved this issue for me.
Linux
Note: This fix might only be for using the official Nextcloud app.
Open a terminal window and edit the following file:
nano $HOME/.config/Nextcloud/nextcloud.cfg
Add the following line under the [General] section:
maxChunkSize=50000000
Save the file (Ctrl+O, Enter, then Ctrl+X to exit), quit the Nextcloud desktop client, and start it again.
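If you prefer to script that edit, here is a small sketch. It assumes the official desktop client's config path shown above, and that [General] is the last (or only) section in the file, since the key is simply appended at the end:

```shell
# Append maxChunkSize to the Nextcloud desktop client config.
# Assumes [General] is the last/only section, since the key is appended.
cfg="$HOME/.config/Nextcloud/nextcloud.cfg"
mkdir -p "$(dirname "$cfg")"

# 50 MB: half of Cloudflare's 100 MB per-request upload limit
chunk=$((100 * 1000 * 1000 / 2))

# Create the [General] section if the file doesn't have one yet
grep -q '^\[General\]' "$cfg" 2>/dev/null || printf '[General]\n' >> "$cfg"
# Only add the key if it isn't already set
grep -q '^maxChunkSize=' "$cfg" || printf 'maxChunkSize=%s\n' "$chunk" >> "$cfg"

grep '^maxChunkSize=' "$cfg"
```

Quit and restart the desktop client afterwards, as above.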
There are three choices:
1) Upload large files directly in the browser.
2) Let the Windows Nextcloud app upload in smaller chunks.
3) Make a separate Nextcloud subdomain that doesn’t go through the Cloudflare proxy.
I chose number 3, assuming that Online Accounts can’t use smaller chunk sizes.
My workaround was to make another DNS record without the proxy enabled. However, you will still need a reverse proxy entry for it in your Caddyfile, in addition to editing /nextcloud-data-folder/data/config/config.php to add one more trusted domain:
'trusted_domains' =>
array (
  0 => '192.168.1.101:8080',
  1 => 'drive.menghini.org',
  2 => 'whateverLongNameYouWantSoNooneCanGuessIt.menghini.org',
),
When signing into Online Accounts, just use that long domain as the server name.
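The extra Caddyfile entry for that hypothetical subdomain would look something like this sketch (same backend as the main entry; only the hostname differs):

```
whateverLongNameYouWantSoNooneCanGuessIt.menghini.org {
    reverse_proxy 192.168.1.101:8080
}
```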
Transferring Nextcloud From One Server To Another
Nextcloud was the only container I had issues with transferring. All of the files were owned by the wrong user. In order to fix this, I had to change the owner of the files and folders from within the docker container.
- First find the container ID by typing sudo docker ps
- Find the Nextcloud container ID and type this line: sudo docker exec -it nextcloud_containerID bash
- Now change the ownership of the files to the correct user.
  - (In my case, folders such as /apps were owned by root and not www-data.)
  - The command to do this was chown -R www-data:www-data config
  - You will probably have to do this with many folders.
- I then got an error about apps updating. I had to run this command from outside the docker container:
sudo docker exec -u 33 -it nextcloud_containerID php occ upgrade -v
- I then had an issue with every URL containing an extra /index.php/ in it.
  - I edited /config/config.php to include this line: 'overwrite.cli.url' => 'https://drive.menghini.org',
  - I then ran this command: sudo docker exec -u www-data -it nextcloud_containerID php /var/www/html/occ maintenance:update:htaccess
- As far as I am aware, this made everything work again.
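Instead of fixing folders one at a time, a single recursive pass over the whole web root may cover everything; here is a sketch with a placeholder container ID (verify first that www-data really is the expected owner for your image):

```
# Fix ownership for everything under the web root in one pass
sudo docker exec -it <containerID> chown -R www-data:www-data /var/www/html
# Then re-run the upgrade as the web user (UID 33 = www-data)
sudo docker exec -u 33 -it <containerID> php occ upgrade -v
```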
Mealie
Mealie is a self-hosted web server for keeping recipes.
1. Make a location where your files will be saved. Mine is /mnt/media1/mealie-data
2. Make a location where your docker-compose.yml file will live. Mine is ~/mealie-app
3. This is the contents of the file (adjust the highlighted as needed):
services:
  mealie:
    image: ghcr.io/mealie-recipes/mealie:latest # adjust as needed
    container_name: mealie
    restart: always
    ports:
      - "9925:9000" # adjust as needed
    deploy:
      resources:
        limits:
          memory: 1000M # adjust as needed
    volumes:
      - /mnt/media1/mealie-data:/app/data/
    environment:
      # Set Backend ENV Variables Here
      ALLOW_SIGNUP: false
      PUID: 1000
      PGID: 1000
      TZ: America/Anchorage
      MAX_WORKERS: 1
      WEB_CONCURRENCY: 1
      BASE_URL: https://recipes.menghini.org
volumes:
  mealie-data:
Setting up SMTP Email
- Go to https://myaccount.google.com/apppasswords and create an app password for Mealie.
- Add the following info in your docker-compose file and change as needed
BASE_URL: https://recipes.menghini.org # already in there; insert the rest below it
SMTP_HOST: "smtp.gmail.com"
SMTP_PORT: "587"
SMTP_FROM_NAME: "[email protected]"
SMTP_FROM_EMAIL: "[email protected]"
SMTP_AUTH_STRATEGY: "TLS"
SMTP_USER: "[email protected]"
SMTP_PASSWORD: "password here"
- Restart with
sudo docker compose down && sudo docker compose up -d
Installing WordPress
WordPress is software that allows you to run a website or blog.
1. Make a location where your files will be saved. Mine is /mnt/media1/wordpress-data
2. This is the contents of the docker-compose.yml file (adjust the highlighted as needed):
services:
  db:
    # We use a mariadb image which supports both amd64 & arm64 architecture
    image: mariadb:10.6.4-focal
    # If you really want to use MySQL, uncomment the following line
    #image: mysql:8.0.27
    command: '--default-authentication-plugin=mysql_native_password'
    volumes:
      - /mnt/media1/wordpress-data/db_data:/var/lib/mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=changeThis
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=changeThis
      - MYSQL_PASSWORD=changeThis
    expose:
      - 3306
      - 33060
  wordpress:
    image: wordpress:latest
    volumes:
      - /mnt/media1/wordpress-data/wp_data:/var/www/html
    ports:
      - 4578:80
    restart: always
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=whateverYouPutAbove
      - WORDPRESS_DB_PASSWORD=whateverYouPutAbove
      - WORDPRESS_DB_NAME=wordpress
volumes:
  db_data:
  wp_data:
Stopping All Docker Containers
If you want to stop all Docker containers without stopping each one individually, you can run this command:
sudo docker ps -q | xargs sudo docker stop
Backing Up To An External Hard Drive Using Rclone
You will want to back up your files to something such as an external hard drive. To do that, I followed these steps:
- Plug in your external hard drive.
- Type lsblk -lf to find your new drive.
- If it is not ext4, you will have to format it with the steps below:
  - To wipe it, run (where X is the drive): sudo shred -v -n1 -z /dev/sdX
    - This command only needs to run for a while; it doesn’t have to cover the entire disk, since overwriting even part of it renders the old contents useless.
  - Verify the drive is now blank; with the command below it should show no disklabels (it may show several blank partitions): lsblk -lf
  - Now run sudo parted -l to see if there are any partition tables. Mine said “unknown”, in which case you have to make one:
    - Type sudo parted /dev/sdX
    - Type select /dev/sdX to make sure you are changing the disk you want
    - Type mklabel gpt and confirm, if there are any prompts.
    - Type q to quit
  - Use gdisk to create a GPT partition on both disks. For example: sudo gdisk /dev/sdX
    - Press n to create a new partition.
    - Choose a partition number (usually 1).
    - Set the size (press Enter to use the default).
    - Choose a partition type (press Enter for the default).
    - Press w to write the changes.
  - Type sudo mkfs.ext4 /dev/sdX1 to format the new partition as ext4. (Note the partition number: running mkfs on /dev/sdX itself would wipe the partition table you just made.) Confirm when prompted.
  - You can verify that this was done with lsblk -lf
- [If you haven’t done so already…] Create a new directory for your new mount point. For example: sudo mkdir /media/externalHDD. Remember, /media is normally used for temporary device mounts.
- To mount the partition to the directory, use the syntax: sudo mount -t ext4 /dev/sdX1 /media/externalHDD
- I then typed sudo chmod -R 777 /media/externalHDD/
so that all the users have permissions. - Install Rclone, if you haven’t already:
sudo apt install rclone
- I had to restart services when I did this, but it did not cause me any issues
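One optional extra: the mount above does not survive a reboot. An /etc/fstab line fixes that; the sketch below just builds the line (the UUID is a placeholder, take the real one from lsblk -lf, and nofail keeps boot from hanging if the drive is unplugged):

```shell
# Placeholder UUID; replace with the value lsblk -lf shows for your partition
uuid="1234-abcd"
line="UUID=$uuid /media/externalHDD ext4 defaults,nofail 0 2"
echo "$line"
# To apply: append the line to /etc/fstab, then test with: sudo mount -a
```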
- For this example, I am going to back up everything under /mnt/media1 (except for the lost+found directory).
- I recommend doing this in a tmux session, as this will take a long time. Just type: tmux
  - Tmux keeps running on the server regardless of whether you leave the SSH session or disconnect. You can also continue the tmux session from a different terminal or computer.
  - Whenever you want to leave tmux, you have two options:
    - Press Ctrl+b, then d. This will detach you from the session. To get back to the session, type tmux a
      - If you have multiple tmux sessions (made by typing tmux again), you can switch between them by pressing Ctrl+b, then s, for switch.
    - Type exit. This will close the session for good.
- I typed the following command in my tmux session:
sudo rclone sync /mnt/media1/ /media/externalHDD/backupMedia1/ --exclude 'lost+found/**' -P --log-file ~/rcloneLogFileMedia1
  - The -P flag shows progress.
  - I am also storing a log file in my home directory, in case there are errors.
  - This took me 11 hours and 38 minutes for my first backup of 1.094 TB over USB 2.0.
- I also wanted to back up my home directory, so I started another tmux session and typed:
sudo rclone sync ~/ /media/externalHDD/backupHome/ -P --log-file ~/rcloneLogFileHome --exclude '/snap/**' --exclude 'rcloneLogFileMedia1' --exclude 'rcloneLogFileHome'
- If you are running an Immich server, you will also have to back up the database manually by using this command from the Immich directory:
sudo docker exec -t immich_postgres pg_dumpall -c -U postgres | gzip > "/path/to/backup/dump.sql.gz"
- In my case, I ran this command (and it took a few minutes):
sudo docker exec -t immich_postgres pg_dumpall -c -U postgres | gzip > "/media/externalHDD/immich20240122.sql.gz"
- You will have to do this for any client-server database such as PostgreSQL or MySQL. SQLite databases are just files on disk, so the file sync above already covers them.
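For completeness, restoring such a dump later looks roughly like the reverse (a sketch following Immich's documented backup flow; the path is a placeholder):

```
gunzip < "/path/to/backup/dump.sql.gz" \
  | sudo docker exec -i immich_postgres psql --username=postgres
```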
- When you are done, unmount the disk by typing:
sudo umount /media/locationOfYourMount
  - If you get a “umount: /media/externalHDD: target is busy.” error, then you need to stop the processes that are using it.
    - You can find out which processes are using it by typing sudo lsof /path/to/target
    - In my case, I had a tmux session and two different bash windows open to it. I had to close them.
- To restore from the backup, I ran the same commands with the source and destination swapped:
sudo rclone sync /media/externalHDD/backupMedia1/ /mnt/media1/ --exclude lost+found/** --exclude node_modules/** -P --log-file ~/rcloneLogFileMedia1_reverse
sudo rclone sync /media/externalHDD/backupHome/ ~/ -P --log-file ~/rcloneLogFileHome --exclude '/snap/**' --exclude 'rcloneLogFileMedia1' --exclude 'rcloneLogFileHome'
Using Tmux
I highly recommend using tmux to run multiple terminals at once, and to keep commands running even while you are logged off.
Common shortcuts:
- Running tmux will start a new session.
- Ctrl + b + % splits the current pane vertically, creating a new pane on the right.
- Ctrl + b + " (double quote) splits the current pane horizontally, creating a new pane on the bottom.
- Ctrl + b + s switches sessions.
- Ctrl + b + d detaches from your session and saves it.
- Running tmux a will bring back the last session.
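Optionally, a couple of quality-of-life tweaks can go in ~/.tmux.conf; these are standard tmux options (mouse support is off and scrollback is 2000 lines by default):

```
# ~/.tmux.conf
set -g mouse on             # click to select panes, drag borders to resize
set -g history-limit 10000  # keep more scrollback per pane
```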