• 0 Posts
  • 28 Comments
Joined 2 years ago
Cake day: September 25th, 2023

  • It is not just a matter of how many ports are open; it is about the attack surface. You can have a single port 443 open behind the best reverse proxy, but if the app behind it is crappy and allows remote code execution, you are fucked no matter what.

    Each open port exposes one or more services to the internet. You have to decide how much you trust each of those services to be secure, and how much you trust your passwords.

    While we can agree that SSH is a very safe service, if you allow password login for root and the password is “root”, the first scanner that passes by will take control of your server.

    As others mentioned, putting everything behind a VPN is the best way to reduce the attack surface: VPN software is usually written with security in mind, so you reduce the risk of zero-day attacks. Also, many VPNs use certificates to authenticate users, which makes guessing your way in virtually impossible.
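
    For illustration, a minimal sketch of closing the weak-password hole mentioned above, assuming a Debian-style sshd (the file path and service name may differ on other distros):

    ```sh
    # Key-only authentication: no password logins at all, and no root password login
    sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config
    sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
    sudo sshd -t && sudo systemctl reload ssh   # validate the config before reloading
    ```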



  • The really important things (essentially only photos) are backed up on a different USB drive and remotely on Backblaze. Around one terabyte costs $2-3 per month (you also pay per operation, so the price depends on how frequently you trigger the backup). You want to search for “cold storage”, which is the name for cloud storage that is accessed infrequently (in other words, more storage than bandwidth). As a bonus, if you use rclone you can encrypt your data before sending it to the cloud.
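
    For illustration, this is roughly what the encrypted upload looks like, assuming an rclone “crypt” remote named b2crypt has already been configured (via `rclone config`) on top of a Backblaze B2 bucket; the names are placeholders:

    ```sh
    # The crypt remote encrypts files client-side before they leave the machine
    rclone sync /mnt/photos b2crypt:photos --progress
    ```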





  • Just use the directory listing of your favourite web server: you get a read-only HTTP view of a directory and all of its content. If you self-host, you likely already have a reverse proxy, so it is just a matter of updating its configuration. I’m sure it is supported by Apache, Nginx, Lighttpd, and Caddy, but I would expect every web server to support it. Caddy is the easiest to use if you are starting from scratch.
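
    With Caddy it is even a single command (the path and port are placeholders); in Nginx the equivalent is the `autoindex on;` directive:

    ```sh
    # Serve /srv/share as a browsable, read-only HTTP directory listing on port 8080
    caddy file-server --root /srv/share --browse --listen :8080
    ```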



  • I’d say that the most important takeaway of this approach is to stop all the containers before the backup. Some applications (like databases) are extremely sensitive to data corruption: if you simply `cp` while they are running, you may copy files of the same program from different points in time and get a corrupted backup.

    It is also worth mentioning that a backup is only good if you have verified that you can restore it. There are so many issues you can discover the first time you restore a backup; you want to be sure you discover them while you still have the original data.
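
    As a minimal sketch of the stop-copy-restart idea (the paths are placeholders, and it assumes the services are managed with docker compose):

    ```sh
    # Stop everything so no files are written mid-copy, then back up and restart
    docker compose stop
    rsync -a --delete /srv/appdata/ /mnt/backup/appdata/
    docker compose start
    ```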



  • The decryption key is more than 20 random characters, so getting only half of it is no big deal, and it doesn’t look like anything interesting.

    It is on the internet mostly because I don’t have anything else to host it on locally. But I see some benefit: I wanted the server to be available immediately after a power failure. If it fetches the key from the internet, I just need the router to be online; if it fetched it from the local network, I would need another server running with an unencrypted disk.



  • A second reason: the server may run your VPN, and with the server down you cannot connect to it to provide the decryption key unless you are on the same network.

    There are some good answers around where the server can easily decrypt automatically as long as it is connected to your home network, but will likely fail at a thief’s home. These setups are much safer than keeping the data unencrypted, even if they are not bulletproof.
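
    One common off-the-shelf way to get that “unlocks at home but not at the thief’s” behaviour is clevis with a tang server on the LAN; a sketch, assuming a LUKS-encrypted /dev/sda2 and a tang server reachable at http://tang.local (both placeholders):

    ```sh
    # Bind the LUKS volume to the local tang server: unlocking succeeds only
    # while the server is reachable, i.e. the disk is still on your network
    sudo clevis luks bind -d /dev/sda2 tang '{"url": "http://tang.local"}'
    ```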



  • I’ve configured something similar. The /boot partition is the only unencrypted one. In the initramfs there is a script that downloads half of the decryption key over HTTP, while the other half is stored in the script itself. The script retries automatically until it can fetch the key and decrypt the root partition.
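
    In spirit the script looks something like this (a simplified sketch, not the literal initramfs hook; the URL and device names are placeholders):

    ```sh
    LOCAL_HALF='xxxxxxxxxxxx'                   # half of the key, embedded in the script
    # Retry until the network is up and the key server answers
    until REMOTE_HALF=$(wget -qO- http://keyserver.example/half); do
        sleep 5
    done
    # Concatenate the two halves and unlock the root partition
    printf '%s%s' "$LOCAL_HALF" "$REMOTE_HALF" \
        | cryptsetup open /dev/sda2 cryptroot --key-file=-
    ```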

    My attack model here is that, as soon as I realize someone stole my NAS, I can shut down the server hosting half of the decryption key, making my data safe. There is a window where the attacker could connect the NAS to a network and decrypt the data, but the static network configuration makes that harder: they would need a default gateway with the same IP address as mine.

    On my TODO list I also have some sort of notification, to get an alert whenever the decryption key is fetched from the internet.
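
    A low-tech way to do that alert, for the sake of example (the log path and ntfy topic are placeholders):

    ```sh
    # On the host serving the key half: watch the access log and push a notification
    tail -F /var/log/nginx/access.log \
        | grep --line-buffered 'GET /keyhalf' \
        | while read -r _; do
            curl -s -d 'decryption key half was fetched' https://ntfy.sh/placeholder-topic
          done
    ```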


  • They also say that installing a different OS will void the warranty. But their x86 models (I wasn’t aware of the ARM ones) literally ship with a USB drive connected to an internal USB port, which starts the setup of their custom Linux if it detects no OS on the internal drives. You just swap that pendrive and install whatever you want. I cannot say it works for all models, but I did a little research before buying mine, and it has been running Debian for more than one year without any compatibility issue.





  • lorentz@feddit.it to Selfhosted@lemmy.world · Testing vs Prod · 6 months ago

    I don’t have a testing environment, but essentially all my services run on Docker, saving their data in a directory mounted from the local filesystem. The compose file reads the image version (its sha digest) from an env file. I have a shell script which:

    1. Triggers a new btrfs snapshot of the volume containing everything
    2. Pulls the new Docker images and stores their hashes in the env file
    3. Restarts all the containers.

    If a new image version is broken, rolling back is as simple as copying the old version into the env file and recreating the container. If data gets corrupted, I can just copy the last working state from an old snapshot.
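
    A rough sketch of that script (the subvolume layout, image name, and env-file format are made up for illustration):

    ```sh
    #!/bin/sh
    set -eu

    # 1. Read-only snapshot of the volume holding the compose project and its data
    btrfs subvolume snapshot -r /srv/services "/srv/.snapshots/services-$(date +%Y%m%d)"

    # 2. Pull the new image and pin its digest in the env file
    docker pull myapp:latest
    docker image inspect --format 'MYAPP_IMAGE={{index .RepoDigests 0}}' myapp:latest > .env
    # the compose file references it as: image: ${MYAPP_IMAGE}

    # 3. Recreate the containers from the pinned digest
    docker compose up -d
    ```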

    The whole OS is on a btrfs volume which is snapshotted regularly, so ideally, if an update fucks it up beyond recovery, I can always boot from a rescue image and restore an old snapshot. But I honestly feel this is an extra precaution: in all the years I have run Debian on my computers, it has never become unbootable.
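
    In practice that restore boils down to something like this (a sketch assuming an @-style root subvolume layout; the device and snapshot names are illustrative):

    ```sh
    # From a rescue image: mount the top-level btrfs volume and swap the root subvolume
    mount -o subvolid=5 /dev/sda2 /mnt
    mv /mnt/@ /mnt/@broken                      # keep the damaged root around, just in case
    btrfs subvolume snapshot /mnt/.snapshots/root-20250101 /mnt/@   # writable copy
    reboot
    ```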