Home Server Architecture with Docker (part 3: docker containers)

To recap: the introduction discussed how the system would be set up, and part 2 of this series covered configuring the NFS shares. Part 3 shows the containers I use and how to configure them.

There are many different containers available, so feel free to search Google for something like “OpenVPN docker container”; if you wanted to host an OpenVPN server, you will find images you might be able to use directly. Since I stream my home movies/podcasts/music, have a Ubiquiti AP, and host a database for my sensor network…I have the following containers:

  • Postgres
  • MiniDLNA
  • Ubiquiti UniFi Controller

One of the neat things about containers is that it doesn’t matter which host runs the container; as long as the data is the same, it looks like nothing has changed. In other words, as long as we mount ALL of our containers’ data directories on our NFS share, which is hosted on the NAS, any Docker server can start the containers, point them at the shares, and the only thing that changes is the IP address of the interfaces. (Of course, you can set the IP address to the same as the Docker server you are replacing, but this needs to be done in your router/DHCP server.)

I won’t go over the PostgreSQL container since I have already talked about it in an earlier article. What I will cover is the other two containers…these are a bit more interesting/difficult. In the run command, you will notice the --net=host flag. This is NOT good to use. It is always best to let Docker handle the networking and not use the host’s networking stack. The reason these two containers have that flag is that they need to send/receive multicast packets, and by default Docker doesn’t forward those into the internal networks in which the containers reside. If we wanted to do this correctly, we would create a custom Docker bridge network, enable multicast forwarding to that new network on the Linux host, and start our containers inside it. Unfortunately, setting up the forwarding for the multicast network is a bit over my head at the moment, so that is something I will work on in the future. Until then…I will use the --net=host flag.
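For what it’s worth, creating the custom bridge network itself is the easy half; it’s the host-side multicast forwarding that is the hard part. A rough sketch (the network name and subnet here are my own picks, not anything standard):

```shell
# Create a user-defined bridge network for the multicast-dependent containers.
# "dlna-net" and the subnet are arbitrary choices.
docker network create \
  --driver bridge \
  --subnet 172.25.0.0/16 \
  dlna-net

# Containers would then join it with --net=dlna-net instead of --net=host:
#   docker run -d --net=dlna-net ... jacobalberty/unifi:unifi5
# The missing piece is routing multicast (e.g. SSDP on 239.255.255.250)
# from the LAN into this bridge; a tool like smcroute is one way to do it.
```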

I haven’t done any custom modifications to the Unifi container so it is started with:

docker run -d --net=host -e TZ='America/Detroit' -v /mnt/dockerSSDVolume/unifi/data:/var/lib/unifi -v /mnt/dockerSSDVolume/unifi/logs:/var/log/unifi --name unifi jacobalberty/unifi:unifi5

You probably don’t need to know this, but if you are ever interested in what directories your container has modified, you can use the docker diff command, which is described in the Docker documentation.
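For example, to see what the Unifi container has changed on its filesystem since it started (using the container name from the run command above):

```shell
# Lists files added (A), changed (C), or deleted (D) inside the container
# since it was created, relative to the container's filesystem.
docker diff unifi
```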

For the MiniDLNA container, I created my own Dockerfile. The reason is that I wanted to use Alpine for my container’s OS. The Alpine base image is only about 5 MB, so it is a very good base to start with. We should always keep things clean, so create a directory to store your Dockerfiles in, and create the Dockerfile for the MiniDLNA container…

sudo mkdir -p /opt/dockerBuilds/minidlna/

sudo nano /opt/dockerBuilds/minidlna/Dockerfile

In that Dockerfile, copy and paste the script below… (I should put this on GitHub one day)

FROM gliderlabs/alpine:latest
RUN apk --no-cache --update add minidlna 
ADD ./minidlna.conf /etc/minidlna.conf

VOLUME ["/var/run/minidlna", "/var/cache/minidlna", "/var/log"]

EXPOSE 1900/udp

ENTRYPOINT [ "/usr/sbin/minidlnad", "-S" ]

There are a couple of things to notice. First, this container is running as root…NOT a good idea. We should really change to a non-root user; this is something I will have to play with in the future. When I was first setting this up, I was having issues with permissions on the NFS share. To run as a non-root user, we would have to change the user MiniDLNA runs as in the config file, make sure that user can read files on the NFS share, and specify in the Dockerfile which user to switch to before running the minidlnad command. Second, I have defined volumes. These volumes are stored locally and not on the NAS box. Since this is only a media streaming server, it doesn’t matter if we have to rebuild the database, but you can always override these volumes in the docker run command.
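As a sketch of what the non-root change could look like (untested; the user name and UID are my own picks, and the NFS share’s permissions would still need to line up with that UID):

```dockerfile
FROM gliderlabs/alpine:latest
RUN apk --no-cache --update add minidlna \
    # Create an unprivileged user; the UID is arbitrary, but it should
    # match whatever user the NFS share allows to read the media files.
    && adduser -D -u 1001 dlna \
    && chown -R dlna /var/cache/minidlna /var/log
ADD ./minidlna.conf /etc/minidlna.conf

VOLUME ["/var/run/minidlna", "/var/cache/minidlna", "/var/log"]

EXPOSE 1900/udp

# Drop privileges before the daemon starts.
USER dlna
ENTRYPOINT [ "/usr/sbin/minidlnad", "-S" ]
```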

For the minidlna.conf file I have the following:

# port for HTTP (descriptions, SOAP, media transfer) traffic

# network interfaces to serve, comma delimited

# specify the user account name or uid to run as

# set this to the directory you want scanned.
# * if you want multiple directories, you can have multiple media_dir= lines
# * if you want to restrict a media_dir to specific content types, you
# can prepend the types, followed by a comma, to the directory:
# + "A" for audio (eg. media_dir=A,/home/jmaggard/Music)
# + "V" for video (eg. media_dir=V,/home/jmaggard/Videos)
# + "P" for images (eg. media_dir=P,/home/jmaggard/Pictures)
# + "PV" for pictures and video (eg. media_dir=PV,/home/jmaggard/digital_camera)

# set this to merge all media_dir base contents into the root container
# note: the default is no

# set this if you want to customize the name that shows up on your clients
friendly_name=Alpine DLNA Server

# set this if you would like to specify the directory where you want MiniDLNA to store its database and album art cache

# set this if you would like to specify the directory where you want MiniDLNA to store its log file

# set this to change the verbosity of the information that is logged
# each section can use a different level: off, fatal, error, warn, info, or debug

# this should be a list of file names to check for when searching for album art
# note: names should be delimited with a forward slash ("/")

# set this to no to disable inotify monitoring to automatically discover new files
# note: the default is yes

# set this to yes to enable support for streaming .jpg and .mp3 files to a TiVo supporting HMO

# set this to beacon to use legacy broadcast discovery method
# defaults to bonjour if avahi is available

# set this to strictly adhere to DLNA standards.
# * This will allow server-side downscaling of very large JPEG images,
# which may hurt JPEG serving performance on (at least) Sony DLNA products.

# default presentation url is http address on port 80

# notify interval in seconds. default is 895 seconds.

# serial and model number the daemon will report to clients
# in its XML description

# specify the path to the MiniSSDPd socket

# use different container as root of the tree
# possible values:
# + "." - use standard container (this is the default)
# + "B" - "Browse Directory"
# + "M" - "Music"
# + "V" - "Video"
# + "P" - "Pictures"
# + Or, you can specify the ObjectID of your desired root container (eg. 1$F for Music/Playlists)
# if you specify "B" and client device is audio-only then "Music/Folders" will be used as root

# always force SortCriteria to this value, regardless of the SortCriteria passed by the client

# maximum number of simultaneous connections
# note: many clients open several simultaneous connections while streaming

# set this to yes to allow symlinks that point outside user-defined media_dirs.

There are a few changes I would like to make besides the non-root user. I would like the videos and other media directories to be read-only. By default, MiniDLNA creates file watchers so that when the files change it can update its database, and I had a terrible time getting files to show up on my media players without giving it write access. It would be nice to make the media shares read-only, configure MiniDLNA with “inotify=no” instead of “inotify=yes” to disable the file watchers, and create a script on the Docker host that uses the docker exec command to tell MiniDLNA to look for newly added files. Since I have not tried that process, I do not know if it will solve the problem of writing to the NFS shares. Anyways…to actually build and run the MiniDLNA container I used the following commands:


docker build -t alpineminidlna .

docker run -d --name minidlna --net=host -v /mnt/FamilyMusic:/opt/Music -v /mnt/FamilyMovies:/opt/Videos -v /mnt/FamilyPictures:/opt/Pictures -m 1024M alpineminidlna
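Going back to the “inotify=no” idea above, the host-side refresh might be as simple as a cron entry like this (hypothetical and untested; in particular, whether a second minidlnad invocation with -R plays nicely with the running daemon’s database is exactly what I would need to verify first):

```shell
# Hypothetical crontab entry on the Docker host: once an hour, ask the
# container's minidlnad for a rescan. -R forces a rebuild of the media
# database. UNTESTED -- see the caveat above.
0 * * * * docker exec minidlna minidlnad -R
```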

I specified a limit on the RAM for the MiniDLNA container because when I didn’t…it started to eat up all the RAM I had on my little Beebox Docker server. I was afraid it might crash my Postgres container when it maxed out my RAM, so I stopped the MiniDLNA container and added the memory limit with the -m parameter. Some people I have found run MiniDLNA on a Raspberry Pi, so it should be able to run within 1 GB of RAM.
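To keep an eye on whether a container is bumping against its limit, docker stats shows live per-container usage:

```shell
# One-shot snapshot of CPU and memory per running container; the
# "MEM USAGE / LIMIT" column will show the 1 GiB cap on minidlna.
docker stats --no-stream
```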


One thing I forgot to mention: if you have issues accessing the ports on your containers, you may need to open the ports on your Docker server. Here are some links I found helpful…
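As an example of what opening a port can look like (the exact syntax depends on your distro’s firewall; these are standard ufw and iptables forms, using MiniDLNA’s default HTTP port 8200 and the SSDP port 1900 from the Dockerfile’s EXPOSE line):

```shell
# Ubuntu with ufw:
sudo ufw allow 8200/tcp
sudo ufw allow 1900/udp

# Or with raw iptables:
sudo iptables -A INPUT -p tcp --dport 8200 -j ACCEPT
sudo iptables -A INPUT -p udp --dport 1900 -j ACCEPT
```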


This one also talks about why --net=host isn’t good, the issues involved, and how to fix them:

Docker Networking 101 – Host mode


In case you are interested, with these few containers here is what the resource utilization looks like while everything is nearly idle…


Starting at the topmost is the UniFi Controller, then MiniDLNA, a test Postgres server, and the sensor net PostgreSQL database. As you can see, 4 GB of RAM isn’t quite enough when things get moving. It would be safer to have 8 GB of RAM, which also allows for more containers, but if you are in a bind 4 GB will work.


Update 7-4-2017:

After playing around with the containers and NFS…permissions have become a problem. I recommend dropping NFS and sticking with Samba. This can be enabled on FreeNAS, and the shares can then be mounted as CIFS shares instead of NFS.

Update 10-23-2017:

After using CIFS for a few months to stream movies, a new problem arose. The Roku device would stop playing 1080p movies after 5 to 10 minutes. Searching the Internet didn’t turn up much on this issue. One thought was that the bandwidth wasn’t enough to keep streaming, and some software will stop playing if the bandwidth drops. Unfortunately, I can’t think of a way to test this without more control over the DLNA client. Since I knew I was able to stream this HD content over NFS, I have reverted back to NFS. The only issue is that I still want to use CIFS to mount the share on clients, so I can upload new movies to the collection.

My current solution is sloppy, but it seems to work. I created a new FreeNAS dataset for the movies, which MiniDLNA will use, and left the other Movies CIFS share alone. In other words, there are now two movie folders, one for CIFS and the other for NFS, both containing copies of the same films. The CIFS share’s films can be copied to the NFS side by a script every hour using rsync or a normal cp command.
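A sketch of that hourly copy step (the two mount-point names here are my own placeholders, and the cp fallback is there only because the article mentions either tool would do):

```shell
#!/bin/sh
# Hourly sync sketch: copy new films from the CIFS-backed folder to the
# NFS-backed folder. Falls back to plain cp when rsync is not installed.
sync_movies() {
    src="$1"
    dst="$2"
    if command -v rsync >/dev/null 2>&1; then
        # -a preserves attributes; rsync skips files that already exist
        # with matching size and modification time, so re-runs are cheap.
        rsync -a "$src"/ "$dst"/
    else
        cp -a "$src"/. "$dst"/
    fi
}

# Example with hypothetical mount points; adjust to your own shares:
# sync_movies /mnt/FamilyMoviesCIFS /mnt/FamilyMoviesNFS
```

Dropping a call to this into cron every hour would keep the NFS copy trailing the CIFS copy by at most an hour.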

A better solution is to drop the CIFS share requirement completely. To upload films, one could use WinSCP to upload the movies to the share.

Another solution would be to keep the CIFS share but delete the movies after they have been moved to the NFS location. This way Windows machines could upload files without using a third-party application.


Other Sources:

  • random issues
  • NFS uid mappings/permissions
  • minidlna config rpi
  • docker multicast

