Whatever the repo is set up with.
Much more important than an enjoyable culture is the material aspect: how much work each developer has to do. Nice vibes help delay burnout but rarely eliminate it. Or they let it happen with a smile on your face.
Pay the developers instead, if you can, so they can reduce hours worked elsewhere. Or contribute code, if you can. This isn’t aimed at you personally, but at anyone reading. I can’t contribute code, but I can pay, so I do that.
Nice. So this model is perfectly usable by lower end x86 machines.
I discovered that the Android app shows results a bit slower than the web app. For the majority of the wait, the request hasn’t even reached Immich; I’m not sure why. When searching from the web app, the request is received by Immich immediately.
You could absolutely do that and be fucked too. However, the point of the pattern I suggested isn’t to replace the return with an assignment. That is, the point isn’t to keep the exact same implementation and just do result = something before returning it. Instead, it’s to use the initialized result variable to store your result directly, at every place in the function where you manipulate it. So in this case my suggestion is to not have psize at all. Instead, start with int result = -1; and return result;, and do everything you currently do to psize on result instead. Then there’s a higher chance you will return the right value. Not a guarantee. I’m not at all implying that “if they only did this one thing, they wouldn’t have fucked up like this, so stupid”. I’m merely suggesting a style that, in my experience, can decrease the probability of this type of error. I’m teaching my team to write in defensive ways so they can feel some confidence in what they wrote, even if they slept 2 hours the night before, and also understand it after another bad night. Cause that ends up happening, life happens, and like OpenZFS we also can’t afford serious bugs in what we do.
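To make that concrete, here’s a sketch in C of the shape of the refactor. The function and the size computation are entirely made up for illustration; this is not the actual OpenZFS code:

```c
/* Hypothetical illustration, not the real OpenZFS function: there is
 * no separate psize variable at all. The result variable is
 * initialized to a failure value, every computation targets it
 * directly, and it is the only thing ever returned. */
int get_block_size(int header_ok, int payload)
{
    int result = -1;           /* written first: default to failure */

    if (header_ok) {
        result = payload * 2;  /* all manipulation happens on result */
        result += 8;           /* e.g. add a made-up header size */
    }

    return result;             /* single variable at the single exit */
}
```

The point is purely structural: since psize never exists, it can’t be returned by mistake.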
Scary indeed.
This one could be helped by always using this pattern whenever you write a function that returns a value, in any language, along with avoiding early returns:
int func(...) {
    int result = -1;
    ...
    return result;
}
I always start by writing my result with its default value, ideally one indicating failure, and the return line. Then I implement the rest. We often don’t have the luxury of choosing a language with the features we like, but a consistently enforced code style can help with a lot of problems. Anyone can make mistakes like the one in this bug, regardless of experience, so every little bit helps.
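Filled in, the skeleton might look like this toy example (my own illustration, not code from the bug in question):

```c
#include <stddef.h>

/* Toy example of the style: result defaults to failure (-1),
 * there are no early returns, and there is one exit at the bottom. */
int find_first(const char *haystack, char needle)
{
    int result = -1;  /* failure default, written before the body */

    if (haystack != NULL) {
        for (size_t i = 0; haystack[i] != '\0'; i++) {
            if (haystack[i] == needle && result == -1) {
                result = (int)i;  /* record first match; no early return */
            }
        }
    }

    return result;  /* the return line was also written before the body */
}
```

Note the cost of the style: the loop keeps running after a match instead of breaking out early. That trade-off is deliberate here, since the goal is one predictable exit path.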
“They shouldn’t be doing it,” Mr Rogers says. “A larger wealthier property owner does not have more property rights than a smaller, less wealthy property owner.”
But seriously, aren’t heat pumps usable for cooling data centers, apart from being more expensive?
A significant decrease in the amount of surplus value society produces going towards tech companies producing proprietary software, which is most of them. Basically, the costs of using software for a whole lotta things are gonna get lower. That would make the society’s products cheaper both for itself and for export. It would free up its labour to do more useful things, one of which could be new FOSS software, but also helping out with the green transition, taking care of the ageing population, education, etc.
All-in, I wanted something on the order of 1MB for client app, server, all dependencies, everything.
Okay that’s gotta be radically different!
Well, you gotta start it somehow. You could rely on Compose’s built-in service management, which will restart containers upon system reboot if they were started with -d and have the right restart policy. But you still have to start them at least once. How do you do that? Unless you plan to start everything manually, you have to use some service startup mechanism, which leads us back to a systemd unit: I’d have to write one that does docker compose up -d. But then I’m splitting the service lifecycle management across two systems. If I want to stop the service, I can no longer do it via systemd; I have to go find where the compose file is and issue docker compose down. Not great. Instead I’d write a stop line in my systemd unit so I can start/stop from a single place. But wait 🫷 that’s kinda what I’m doing already, isn’t it? Except that if I start it with docker compose up without -d, I don’t need a separate stop line and systemd can directly monitor the process. As a result I get logs in journald too, and I can use systemd’s restart policies. Having the service managed by systemd also means I can use systemd dependencies such as fs mounts, network availability, you name it. It’s way more powerful than Compose’s restart policy.

Finally, I like to clean up any data I haven’t explicitly intended to persist across service restarts, so that I don’t end up debugging an issue that manifests itself because of some persisted piece of data I’m completely unaware of.
Let me know how the search performs once it’s done. Speed of search, subjective quality, etc.
Why start anew instead of forking or contributing to Jellyfin?
I think I lost neurons reading this. Other commenters in this thread had the resilience to explain what the problems with it are.
The problem is that Grok has been put in a position of authority on information. It’s expected to produce accurate information, not just spit out whatever you ask for, regardless of factuality. So the expectation its owners have created for it is not the same as the one for Google. You can’t expect most people to understand what an LLM does, because that doesn’t scale. The general public uses Twitter, and most people get their information about the products they’re sold and use from the manufacturer. So the issue here is with the manufacturer and their marketing.
I use a fixed tag. 😂 It’s more of a simple way to update: change the tag in SaltStack, apply the config, the service is restarted, the new tag is pulled. If the tag doesn’t change, the pull is a no-op.
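For illustration, a minimal sketch of what that flow could look like as a Salt state. The state IDs, paths, and the tag value are all made up, not my actual config:

```yaml
# Hypothetical Salt state: bumping immich_tag and applying the config
# rewrites the compose file and restarts the service, whose
# ExecStartPre pulls the new image.
immich_compose_file:
  file.managed:
    - name: /opt/immich-docker/docker-compose.yml
    - source: salt://immich/docker-compose.yml.jinja
    - template: jinja
    - context:
        immich_tag: v1.119.0   # fixed tag; change this line to update

immich_service:
  service.running:
    - name: immich
    - watch:
      - file: immich_compose_file
```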
Let me know how inference goes. I might recommend that to a friend with a similar CPU.
Yup. Everything is in one place and there are no hardcoded paths outside of the work dir, making it trivial to move across storage or even machines.
Because on restart I clean up everything that isn’t explicitly persisted on disk:
[Unit]
Description=Immich in Docker
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
WorkingDirectory=/opt/immich-docker
ExecStartPre=-/usr/bin/docker compose kill --remove-orphans
ExecStartPre=-/usr/bin/docker compose down --remove-orphans
ExecStartPre=-/usr/bin/docker compose rm -f -s -v
ExecStartPre=-/usr/bin/docker compose pull
ExecStart=/usr/bin/docker compose up
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target
Did you run the Smart Search job?
That’s a Celeron, right? I’d try a better AI model. Check this page for the list; you could try the heaviest one. It’ll take a long time to process your library, but inference is faster. I don’t know how much faster; maybe it would be fast enough to be usable. If not, choose a lighter model. There are execution times in the table that I assume indicate how heavy the models are. Once you change the model, you have to let it rescan the library.
If the cost of panels drops significantly, there will be more capital available to spend on inverters, even if those stay at current prices, still decreasing the overall cost of deployment. But yes. 😄