
docs.gitlab.com/ee/administrat

tl;dr:

"GitLab has memory leaks."

Their solution is monkey patching their application to check its memory usage after every 16th HTTP request and commit suicide if it's too much. This page is telling sysadmins who install their own GitLab instance how to configure this.
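Mechanically, that amounts to something like the sketch below. This is not GitLab's actual code (theirs is Ruby, wired into their application server); it's a minimal Python illustration of the "check every Nth request, die if too fat" pattern, and the helper names and the 1 GiB threshold are made up.

```python
# Minimal sketch of the pattern described above (not GitLab's implementation):
# count requests, check the worker's resident set every 16th one, and exit so
# the master/supervisor process can respawn a fresh worker.
import os
import sys

CHECK_EVERY_N_REQUESTS = 16
MEMORY_LIMIT_BYTES = 1 * 1024 ** 3  # hypothetical 1 GiB ceiling

_requests_handled = 0

def _current_rss_bytes():
    # Linux-specific: the second field of /proc/self/statm is resident pages.
    with open("/proc/self/statm") as f:
        resident_pages = int(f.read().split()[1])
    return resident_pages * os.sysconf("SC_PAGE_SIZE")

def after_request():
    """Call after each handled HTTP request."""
    global _requests_handled
    _requests_handled += 1
    if _requests_handled % CHECK_EVERY_N_REQUESTS != 0:
        return
    if _current_rss_bytes() > MEMORY_LIMIT_BYTES:
        # "Commit suicide": exit non-zero and let the supervisor restart us.
        sys.stderr.write(f"worker {os.getpid()} over memory limit, exiting\n")
        sys.exit(1)
```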

You know, I empathise with GitLab here. git.sr.ht had a memory leak once. My solution was pretty interesting: I wrote a shell script on a cronjob set to every 5 minutes, which checked the heap size of the git.sr.ht backend and, if it was within a certain percentage of system memory, it sent a SIGTERM along and

Wait, no, none of that is what happened. What happened is I fixed the memory leak.

I'm hitting the new git.sr.ht API backend 100x/sec with a non-trivial query and it's using 4 MB of RAM and 2% of one CPU core

OH MY GOD

There are MORE pages detailing how to kill memory hungry GitLab processes in MORE situations and with MORE approaches catered to your PARTICULAR OOM killing needs

docs.gitlab.com/ee/administrat

@sir they also had a similar problem with nazis but they somehow managed to come up with a worse solution

@sir I remember the days when PHP's per-request limit was 8MB and anything demanding higher limits was rightly considered a war crime

Even MediaWiki only needs 20MB

@sir I've been a rails developer since the beginning, and have dealt with many memory leaks. It's pretty hard to leak memory in Ruby itself; the vast majority of the time it's from deep within C extensions, like ImageMagick or libxml.

@sir I hope you noticed there was a section called 'Switch to Puma' right above the page's link in the navigation sidebar.

"As of GitLab 12.9, Puma has replaced Unicorn. as the default web server. [...]

Why switch to Puma?

Puma has a multi-thread architecture which uses less memory than a multi-process application server like Unicorn. [...]"

source: docs.gitlab.com/ee/administrat

@sir this solution is quite elegant and future-proof

@sir We wanted to install GitLab on our Kubernetes cluster at work. GitLab provides official installation charts.

> In order to deploy GitLab on Kubernetes, the following are required:

> A Kubernetes cluster, version 1.12 or higher. 8vCPU and 30GB of RAM is recommended.

This was unthinkable when the backend for our main project uses 80 MB of RAM. Needless to say, we scrapped the idea and just installed Gitea and Jenkins (not perfect, but better than throwing away 30 GB).

@sir @rozenglass I'm happy that Pagure still stays within those nice reasonable buckets. In the *high end*, deployments need at most 4GB of RAM, and most deployments get away with 1GB of RAM or less for *everything* (pagure, redis, pgsql). If it didn't, I don't think I could run it for my personal use on my tiny VM...

@sir
> sidekiq_memory_killer

Oh, reminds me that Mastodon admins had/have to pull similar shitty tricks.

@sir
They took VC money. Maybe the VC is also invested in a DRAM producer?

@czero1 @sir do you have any evidence of the conspiracy (non-snark, genuine interest and use of the word)?

I only ask because another project I knew wanted an integrated source forge, and at the time the only real options were GitLab and Phabricator.

@sir I'm quite surprised to have read something like this.

@sir the next top paying job will be to fix the memory leaks in all this crap software. Start learning valgrind.

@sir this has always been the problem with rails

@sir
Clever! By counting to exactly 16, they only need 4 bits to store the counter, saving some valuable memory!
