In this post, we show how to deploy and configure NextCloud – the data-sharing platform – to store and share files in a way that is both secure and cost-effective.
Nowadays, data sharing among team members is a must. For source code there is practically one standard way – Git (or, to be more specific, two ways: github.com and gitlab.com). For file sharing in general, however, there are many options. Probably we all love the Dropbox platform for its awesome usability and simplicity. However, many of you point out two major drawbacks:
- Beyond a certain data volume, the Dropbox service gets noticeably more expensive.
- All shared data are stored on the Dropbox provider's side, so to be honest we don't know who else gets access to our data and what they do with it.
There are many tips all over the Web about replacing Dropbox with an open-source solution on a private VPS. But to be honest – until you try, you won't realize how incomplete they are, and how challenging this problem is.
To solve the problem stated above on our own, we gave the ownCloud platform a try. After several preliminary setups, we gained enough experience to attempt the final deployment. In the meantime, we moved to the sibling project NextCloud, but we have to emphasize that this decision was not driven by any painful experiences, but rather by our impression of NextCloud's great community support. Using ownCloud, you can probably be just as successful as we are with NextCloud.
We decided to deploy NextCloud on the cheapest DigitalOcean droplet, hoping its computing resources would be sufficient to support our team of several dozen people. To simplify installation and maintenance of the whole configuration, we used containerization. There are plenty of docker-compose.yml templates on the Internet, but none of them turned out to be an out-of-the-box solution for our needs, so we had to develop one on our own.
```yaml
version: '3'

services:
  redis:
    image: redis:5-alpine
    restart: always

  db:
    image: postgres:11-alpine
    restart: always
    volumes:
      - db:/var/lib/postgresql/data
    env_file:
      - db.env

  app:
    image: nextcloud:fpm-alpine
    restart: always
    volumes:
      - nextcloud:/var/www/html
    environment:
      - POSTGRES_HOST=db
      - POSTGRES_DB=nextcloud
      - REDIS_HOST=redis
    env_file:
      - db.env
    depends_on:
      - db
      - redis

  web:
    build: ./web
    restart: always
    volumes:
      - nextcloud:/var/www/html:ro
    environment:
      - VIRTUAL_HOST=x.y.com
      - LETSENCRYPT_HOST=x.y.com
      - LETSENCRYPT_EMAIL=firstname.lastname@example.org
    depends_on:
      - app
    networks:
      - proxy-tier
      - default

  proxy:
    build: ./proxy
    restart: always
    ports:
      - 80:80
      - 443:443
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    volumes:
      - certs:/etc/nginx/certs:ro
      - vhost.d:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - proxy-tier

  letsencrypt-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    volumes:
      - certs:/etc/nginx/certs
      - vhost.d:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - proxy-tier
    depends_on:
      - proxy

volumes:
  db:
  nextcloud:
  certs:
  vhost.d:
  html:

networks:
  proxy-tier:
```
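With this file saved as docker-compose.yml – together with a db.env file holding the PostgreSQL credentials and the ./web and ./proxy build contexts – bringing the whole stack up is a single command. The db.env contents below are an assumption shown for illustration; the commands are a sketch of a typical first start:

```shell
# db.env – credentials shared by the db and app services (example values):
#   POSTGRES_USER=nextcloud
#   POSTGRES_PASSWORD=choose-a-strong-password

docker-compose up -d        # build ./web and ./proxy, pull the other images, start everything
docker-compose ps           # verify that all six containers are up
docker-compose logs -f app  # follow the NextCloud installer on first start
```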
Let us dive into details!
If you want to take your NextCloud to production, never go without an in-memory database. Skipping it will result in frequent file-locking errors on the clients (plenty of `server replied: <file> is locked` messages) and overall performance degradation on the server. We used Redis, and it performs very well.
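With the REDIS_HOST=redis variable from the compose file above, the official NextCloud image wires Redis in on its own. The resulting settings in config/config.php look roughly like this – a sketch of the relevant fragment only, not a complete configuration:

```php
// config/config.php fragment (sketch) – Redis as the file-locking backend
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
  'host' => 'redis',   // the service name from docker-compose.yml
  'port' => 6379,
],
```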
And yes, you may follow tips published on the Internet and switch file locking off with the option `'filelocking.enabled' => false`. Good luck with that – it's suicide! 😉
As the persistent data store, we chose PostgreSQL, as we are excited about its direction and pace of development; in our opinion, little time (if any) remains before it catches up with enterprise-class competitors such as Oracle or MSSQL. Two details deserve additional attention. First, we configured the data directory as external (the volumes option), which means the data remain persistent regardless of the PostgreSQL container's life cycle. Second, we pinned the major version of PostgreSQL (to 11), even though many docker-compose configurations found on the Web simply skip it. The justification: when upgrading containers (the docker pull command), the latest image versions are downloaded. While the NextCloud container's developers have provided for automated migration, in the case of the PostgreSQL container you have to take care of it yourself. An example of how to do that (although in my opinion not quite complete – but this is a topic for another post) can be found e.g. here. Pinning the major version means upgrades still happen, but only within version 11, which, according to the PostgreSQL documentation, ensures backward compatibility and does not require migration of data files. Otherwise, one day after running docker pull you might be surprised that the system does not start, because a new major version of PostgreSQL was released.
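For completeness: when you do decide to move to a new major version, the upgrade with this setup boils down to a dump-and-restore cycle, since PostgreSQL data files are not compatible across major versions. A rough sketch (service and user names follow the compose file above; test on a copy of your data first):

```shell
# 1. Dump everything from the running PostgreSQL 11 container
docker-compose exec db pg_dumpall -U nextcloud > dump.sql

# 2. Stop the stack and set the old data volume aside
docker-compose down

# 3. Bump the image tag in docker-compose.yml (e.g. postgres:12-alpine),
#    point it at a fresh volume, and start only the database
docker-compose up -d db

# 4. Restore the dump into the new major version
docker-compose exec -T db psql -U nextcloud < dump.sql
```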
To implement a secure connection, we used containerization as well.
As you can see above, we decided on the Let's Encrypt solution – because it is good and because it is cheap 😉 We used three sidecar containers that form a truly self-playing orchestra, with jrcs/letsencrypt-nginx-proxy-companion on first violin! All you need is to specify VIRTUAL_HOST, LETSENCRYPT_HOST, and LETSENCRYPT_EMAIL. With this solution, periodic certificate renewal happens totally automagically.
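If you ever want to trigger a renewal by hand rather than wait for the periodic check, the companion image ships a helper script for that. The container name below is an assumption – check yours with docker ps:

```shell
# Force an immediate re-run of the ACME issuance/renewal flow
docker exec <letsencrypt-companion-container> /app/force_renew
```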
Everything above will let you run NextCloud services in an efficient and secure manner. In the next part, we will show you how to configure NextCloud to use the resources of Simple Storage Service (a.k.a. S3).