In this post, we present how to deploy and configure NextCloud – the data-sharing platform – to store and share files in a way that is both secure and cost-effective.
Nowadays, data sharing among team members is a must. While for source code there is practically a single way – Git (or, to be more specific, two: github.com and gitlab.com) – for file sharing in general there are many different options. We probably all love the Dropbox platform for its awesome usability and simplicity. However, many of you point out two major drawbacks:
- Once your data volume exceeds a certain point, the Dropbox service gets noticeably more expensive.
- All the shared data is stored on the Dropbox provider's side, so, to be honest, we don't know who else gets access to our data and what they do with it.
There are many tips all around the web about replacing Dropbox with an open-source solution set up on a private VPS. But, to be honest, until you try it yourself you won't realize how incomplete they are and how challenging this problem is.
To solve the problem ourselves, we gave the ownCloud platform a try. After several preliminary setups, we gained enough experience to make the final deployment. In the meantime, we moved to the sibling project NextCloud; we have to emphasize that this decision was not driven by any painful experiences, but rather by our observation of NextCloud's great community support. You can probably be just as successful with ownCloud as we are with NextCloud.
We decided to deploy NextCloud to the cheapest DigitalOcean droplet, hoping that its computing resources would be sufficient to support our team of several dozen people. To simplify installation and maintenance of the whole configuration, we used containerization. There are plenty of docker-compose.yml templates on the Internet, but none of them turned out to be an out-of-the-box solution for our needs, so we had to develop our own.
version: '3'

services:
  # in-memory cache used by NextCloud for file locking (see 1a below)
  redis:
    image: redis:5-alpine
    restart: always

  # persistent relational store (see 1b below)
  db:
    image: postgres:11-alpine
    restart: always
    volumes:
      - db:/var/lib/postgresql/data
    env_file:
      - db.env

  # NextCloud itself (PHP-FPM flavour)
  app:
    image: nextcloud:fpm-alpine
    restart: always
    volumes:
      - nextcloud:/var/www/html
    environment:
      - POSTGRES_HOST=db
      - POSTGRES_DB=nextcloud
      - REDIS_HOST=redis
    env_file:
      - db.env
    depends_on:
      - db
      - redis

  # nginx serving the NextCloud files behind the reverse proxy
  web:
    build: ./web
    restart: always
    volumes:
      - nextcloud:/var/www/html:ro
    environment:
      - VIRTUAL_HOST=x.y.com
      - LETSENCRYPT_HOST=x.y.com
      - LETSENCRYPT_EMAIL=x@y.com
    depends_on:
      - app
    networks:
      - proxy-tier
      - default

  # reverse proxy terminating HTTP/HTTPS (see 2 below)
  proxy:
    build: ./proxy
    restart: always
    ports:
      - 80:80
      - 443:443
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    volumes:
      - certs:/etc/nginx/certs:ro
      - vhost.d:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - proxy-tier

  # obtains and renews Let's Encrypt certificates automatically (see 2 below)
  letsencrypt-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    volumes:
      - certs:/etc/nginx/certs
      - vhost.d:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - proxy-tier
    depends_on:
      - proxy

volumes:
  db:
  nextcloud:
  certs:
  vhost.d:
  html:

networks:
  proxy-tier:
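For completeness, this is roughly how we bring the stack up. A minimal sketch: it assumes the file above is saved as docker-compose.yml, that db.env defines the PostgreSQL credentials (POSTGRES_USER, POSTGRES_PASSWORD), and that the ./web and ./proxy directories contain the Dockerfiles referenced by the build directives.

docker-compose up -d --build    # build the web/proxy images and start all services
docker-compose logs -f app      # watch the NextCloud container finish its first-run setup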
Let's dive into the details!
1. Databases
1a. Redis
If you want to take your NextCloud to production, never go without an in-memory database. Skipping it will result in frequent file-locking errors on the clients (plenty of "server replied: <file> is locked" messages) and overall performance degradation of the server. We used Redis, and it performs very well.
And yes, you may follow the tips published on the Internet and switch file locking off with the 'filelocking.enabled' => false option. Good luck with that suicide mission! 😉
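By the way, with the official image, setting REDIS_HOST (as in the compose file above) should be all it takes to make NextCloud use Redis for distributed caching and file locking – at least it was for us. A quick sanity check after the first start could look like this (a sketch; if your image version behaves differently, the same values can be set manually in config.php):

docker-compose exec -u www-data app php occ config:system:get memcache.locking
# should print \OC\Memcache\Redis when Redis is wired in correctly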
1b. PostgreSQL
As the persistent data store, we chose PostgreSQL, as we are excited about its direction and velocity of development. We claim that there is not much time left (if any) before it catches up with enterprise-class competitors such as Oracle or MSSQL. Two details deserve additional attention. First, we configured the data directory as an external volume (the volumes option), so the data persist regardless of the PostgreSQL container's life cycle. Second, we pinned the major version of PostgreSQL (to 11), even though many docker-compose configurations found on the web simply skip it. The justification is that when you upgrade containers (the docker pull command), the highest available versions are downloaded. While in the case of the NextCloud container the developers have provided automated migration, in the case of the PostgreSQL container you have to take care of it yourself. An example of how to do it (although, in my opinion, not quite complete – but this is a topic for another post) can be found e.g. here. Thus, explicitly specifying the major version still gives you upgrades, but only within version 11, which, according to the PostgreSQL documentation, ensures backward compatibility and does not require migrating the data files. Otherwise, after a docker pull you might be surprised that the system does not start because a new major version of PostgreSQL has been released.
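To give you a feeling of what such a manual migration involves, here is a rough sketch of the dump-and-restore dance for a hypothetical jump from 11 to 12. The user and database names come from your db.env; treat it as an illustration, not a complete recipe (see the caveats mentioned above).

docker-compose exec -T db pg_dumpall -U nextcloud > dump.sql   # dump with the OLD (v11) binaries
docker-compose down                                            # stop the stack; named volumes survive
docker volume rm <project>_db    # drop the old data directory (!); <project> is your compose project name
# now change the image tag in docker-compose.yml to postgres:12-alpine, then:
docker-compose up -d db
docker-compose exec -T db psql -U nextcloud -d nextcloud < dump.sql   # a few "already exists" errors for the fresh role/database are harmless
docker-compose up -d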
2. HTTPS
To implement a secure connection, we used containerization as well.
As you can see above, we decided to go with Let's Encrypt because it is good and because it is cheap (free, in fact) 😉 We used three sidecar containers that form a truly self-playing orchestra, with jrcs/letsencrypt-nginx-proxy-companion playing first violin! All you need to do is specify VIRTUAL_HOST, LETSENCRYPT_HOST, and LETSENCRYPT_EMAIL. With this setup, periodic certificate renewal happens totally automagically.
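On top of that, the companion image ships a couple of helper scripts that we find handy for sanity checks (a sketch; the script paths are what the image provides at the time of writing):

docker-compose exec letsencrypt-companion /app/cert_status    # list managed certificates and their expiry dates
docker-compose exec letsencrypt-companion /app/force_renew    # trigger an immediate renewal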
Everything above will let you run NextCloud in an efficient and secure manner. In the next part, we will show you how to configure NextCloud to use Simple Storage Service (a.k.a. S3) resources.
12 replies on “Private file sharing using NextCloud with Wasabi S3 – a good dropbox replacement (Part1)”
When will Part 2 be out? 😉 I am curious about your experiences with Wasabi…
Indeed, Wasabi is quite a niche player when it comes to cloud services, but in my humble opinion, it fills the existing market gap well. As soon as we finish a few projects I will write a bit more about it. Probably in January.
I will definitely be happy to see the next part of this; I have been struggling to set Wasabi up as the primary object storage on my NextCloud. I was able to get S3 working, but not Wasabi so far.
Yep, I remember my promise, so be patient and follow our blog. Probably before “Part 2”, we will publish a post about Wasabi Cloud itself, as their documentation is quite limited and some functionalities should be explained before a deeper dive.
For now, here we have described some (not so obvious) concepts on which the Wasabi service is based:
https://blog.nubisoft.pl/wasabi-cloud-storage-the-quick-introduction-to-cheap-yet-powerful-s3-implementation/
Thanks! Such kind words drive us to share our knowledge. We invite you to the second part of this post, which we have just published:
https://blog.nubisoft.pl/nextcloud-with-wasabi-s3-part2/
Guys, did you try Traefik? We moved from nginx/letsencrypt-companion and have never looked back. The ACME integration is nice and simple.
Also, the Docker integration: we can configure anything Traefik-related directly in docker-compose.yml (via labels).
Finally, Cloudflare. We actually use it for everything instead of Let's Encrypt. It's simpler and gives us more than just HTTPS.
We only use Let's Encrypt directly (instead of Cloudflare) when the origin site may need to POST large payloads (e.g. GitLab, NextCloud), since Cloudflare has a 100 MB limit in these scenarios.
Of course, just my $0.02 here. But I'm chiming in because I would love to hear about your experience.
Finally: congratulations on your fantastic blog. I just found it, but I'm already learning precious things (from ZFS to software development) 🙂
Thanks for these tips. In fact, you are right about everything. We use Nginx because we need to know it well anyway, since the Kubernetes ingress we use is based on it. But honestly, if I had to improve this configuration, I would go a step further and reach for Caddy. There, the integration with Let's Encrypt is even more seamless.