That's not the feature I would port to Paperless. Paperless needs an O counter lol.
In Firefox on Android I just flip the switch to request the desktop site and it's mostly fine…
Have you checked out Vikunja? (https://vikunja.io/) It was a pretty easy replacement for TickTick for my family.
yah, my house is wired with copper, and 10 gig copper uses a lot of power. It doesn't really help that the new, slightly less power-hungry 48-port 10 gig switches are thousands of dollars. I'm using 100 to 150ish watts per 10 gig switch to be able to buy the switch for under 500 bucks, instead of using 60-100 watts and paying 2-5k per switch…
Dell PowerConnect 8164s and Arista 7050TXs. House is wired with copper, so 10 gig copper is what I have to use, and that's power hungry.
50 watts is maybe half of one of my 10 gig switches…
More like he buys a Powerball ticket in his country and the numbers win the equivalent prize in the lucky guy's country.
I am running Proxmox at a moderately sized corp. The lack of a real support contract almost kills it, which is too bad, because it's a decent product.
Just came here to say this: it works on a 10-dollar-a-year RackNerd VPS for me, no problem. Matrix chugs on my much bigger VPS, although it's sharing that with a bunch of other things; overall it should have much more resources.
Those are puny mortal numbers… my backup nas is more than twice that…
I use rss-bridge for the popular stuff, but I've found rss-funnel (https://github.com/shouya/rss-funnel) to be nicer for creating my own scrapes (mostly taking RSS feeds that link to the website instead of the article, and adding a link to the article mentioned on the website).
Pretty sure that title is firmly held by McAfee, even now.
Pretty much this. I don't even bother with Watchtower anymore. I just run this script from cron, pointed at the directory where I keep my directories of active Docker containers and their compose files:
```
#!/bin/sh
for d in /home/USERNAME/stacks/*/
do
  (cd "$d" && docker compose pull && docker compose up -d --force-recreate)
done
for e in /home/USERNAME/dockge/
do
  (cd "$e" && docker compose pull && docker compose up -d --force-recreate)
done
docker image prune -a -f
```
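For reference, a crontab entry to run a script like the one above might look like this (the script path, log path, and 4am schedule are just example placeholders, not from the original comment):

```shell
# m h dom mon dow  command
# Pull and recreate all compose stacks nightly at 4am (example schedule/path)
0 4 * * * /home/USERNAME/update-stacks.sh >> /home/USERNAME/update-stacks.log 2>&1
```

Redirecting stdout and stderr to a log file is optional, but handy for seeing which images actually got updated overnight.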
Does yours have 8 SATA ports or dual external SFF-8088 ports by any chance, and more RAM?
Never saw that on WireGuard once I found the better connections for my location, weird.
Because if you use relative bind mounts, you can move a whole Docker Compose set of containers to a new host with docker compose stop, then rsync it over, then docker compose up -d.
Portability and backup are dead simple.
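The stop/rsync/up workflow described above can be sketched roughly like this (the stack path and the `newhost` name are hypothetical placeholders, not from the original comment; it assumes the same path exists on both hosts and that all bind mounts are relative to the stack directory):

```shell
#!/bin/sh
# Move a compose stack (with relative bind mounts) to another host.
# STACK and DEST are example values; adjust for your own setup.
STACK="/home/USERNAME/stacks/myapp"
DEST="newhost"

cd "$STACK" || exit 1
docker compose stop                    # stop the containers cleanly
rsync -a "$STACK"/ "$DEST":"$STACK"/   # copy the dir, bind-mount data included
ssh "$DEST" "cd '$STACK' && docker compose up -d"  # bring it up on the new host
```

Because the compose file and its bind-mounted data travel together in one directory, the same rsync also doubles as a backup.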
You need to create a docker-compose.yml file. I tend to put everything in one directory per container, so I just have to move the directory somewhere else if I want to move that container to a different machine. Here's an example I use for Picard, with examples of NFS mounts and local bind mounts using paths relative to the directory the docker-compose.yml is in. You basically just put this in a directory, create the local bind-mount dirs in that same directory, and adjust YOURPASS and the mounts/NFS shares, and it will keep working everywhere you move the directory, as long as the host has Docker and an image is available for the system's architecture.
```
version: '3'
services:
  picard:
    image: mikenye/picard:latest
    container_name: picard
    environment:
      KEEP_APP_RUNNING: 1
      VNC_PASSWORD: YOURPASS
      GROUP_ID: 100
      USER_ID: 1000
      TZ: "UTC"
    ports:
      - "5810:5800"
    volumes:
      - ./picard:/config:rw
      - dlbooks:/downloads:rw
      - cleanedaudiobooks:/cleaned:rw
    restart: always

volumes:
  dlbooks:
    driver_opts:
      type: "nfs"
      o: "addr=NFSSERVERIP,nolock,soft"
      device: ":NFSPATH"
  cleanedaudiobooks:
    driver_opts:
      type: "nfs"
      o: "addr=NFSSERVERIP,nolock,soft"
      device: ":OTHER NFSPATH"
```
Dockge is amazing for people who see the value in a GUI but want it to stay the hell out of the way. https://github.com/louislam/dockge lets you use compose without trapping your stuff in stacks like Portainer does. If you decide you don't like Dockge, you just go back to the CLI and do your docker compose up -d --force-recreate.
Jellyfin has a spot for each library folder to specify a shared network folder, except everything just ignores the shared network folder and has Jellyfin stream it over HTTPS. Direct streaming should play from the specified network source, or at least be easily configurable to do so, for situations where the files are on a NAS separate from the Docker instance. That way you avoid streaming the data from the NAS to the Jellyfin Docker container on a different computer, and then back out to the third computer/phone/whatever that is the client. This matters when the NAS has a beefy network connection but the virtualization server has much less, or is sharing it among many VMs/Docker containers (i.e. I have 10 gig networking on my NAS and 2.5 gig on my virtualization servers, currently hamstrung to 1 gig while I wait for a 2.5 gig switch to show up). They have the correct settings to do this right built into Jellyfin, and yet they snatched defeat from the jaws of victory (a common theme for Jellyfin, unfortunately).
Almost none, now that I automated updates and a few other things with Kestra and Ansible. I need to figure out alerting in Wazuh, and then it will probably drop to none.