@sir I opened that page to be presented with a missing graph because apparently Privacy Badger decided to block l.sr.ht. Any idea why?
@YaLTeR it's probably being overzealous by default. l.sr.ht is my domain, I can assure you it's legitimate
@sir The fact that this is hard to determine is precisely why I'm skeptical about static linking as a sysadmin:
>It is also unknown if any of these vulnerabilities would have been introduced after the last build date for a given statically linked binary
@jfred my point is that even without the answer to this question, the impact is small, but if this question were answered, the impact could be virtually non-existent (and the question is answerable).
@sir Sure, but:
>Not including libc, the only libraries which had "critical" or "high" severity vulnerabilities in 2019 which affected over 100 binaries on my system were dbus, gnutls, cairo, libssh2, and curl.
If everything were statically linked, how sure would you be that you'd actually updated those 100+ binaries? Yes, there are only a few libraries that are widely used, but it sounds like a royal pain to track all of them down when there's a major vuln.
@jfred because they're in the dependency list for the package.
@indirection @jfred static linking comes with: a minor increase in binary size, which may be offset by not having to distribute the unused portion of the shared library; a minor increase in update size, which may be offset by differential updates; and a large increase in rebuild times when a vulnerability appears, which may be offset by the usual delay between vulnerability discovery and announcement, during which distros are usually clued in and given time to prepare a mitigation.
@sir I guess it's just a matter of time until the tooling is there that's smart enough to do partial binary updates and only link the necessary code.
Right now I know statically linked Rust binaries are huge. I'm not sure about other compiled languages. I'm assuming C/C++ linkers are smart today since they're "leading the way" in a sense.
@sir Okay, that's fair. There's lots of good distro tooling right now around dynamically linked packages and not a whole lot around statically linked ones. I will concede that, if the tooling were as good, static linking could be as good.
I use Guix right now for example; when a library's package definition gets updated, everything that depends on it is automatically recompiled if there's not already a binary package built on CI. It's hard to beat that peace of mind.
@jfred hahaha if you're already recompiling everything which depends on your shared library when it updates then what's the fucking point!
@sir Well, in the case of security patches in particular you/the maintainers would typically use grafts instead to be quicker about it: https://guix.gnu.org/manual/en/html_node/Security-Updates.html
@jfred I see. Similar tooling could be built for static linking. Faster rebuilds could be accomplished by keeping the build artifacts around, and faster distribution could be accomplished with binary patches. More sophisticated analysis, with cacheable intermediates, could also be performed to narrow down the list of affected programs. That would be useful for shared objects too: it could tell you which programs to restart after a security update.
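As a rough illustration of that last point, here's a minimal sketch (my own, not existing tooling) of how "which programs need a restart" can already be approximated on Linux: scan each process's /proc/<pid>/maps for a mapping of the updated library. The program name and output format are assumptions for the example.

```c
/* restart-check.c - list PIDs that currently map a given shared object,
 * e.g. `./restart-check libssl.so`, so you know what to restart after a
 * library security update. Linux-only sketch. */
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <library-name>\n", argv[0]);
        return 1;
    }

    DIR *proc = opendir("/proc");
    if (!proc) {
        perror("/proc");
        return 1;
    }

    struct dirent *ent;
    while ((ent = readdir(proc)) != NULL) {
        /* Only numeric directory names under /proc are PIDs. */
        if (!isdigit((unsigned char)ent->d_name[0]))
            continue;

        char path[64], line[4096];
        snprintf(path, sizeof(path), "/proc/%s/maps", ent->d_name);
        FILE *maps = fopen(path, "r");
        if (!maps)
            continue; /* process exited or is unreadable */

        while (fgets(line, sizeof(line), maps)) {
            if (strstr(line, argv[1])) {
                printf("%s\n", ent->d_name);
                break;
            }
        }
        fclose(maps);
    }
    closedir(proc);
    return 0;
}
```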
@sir Yeah - I'd love to see that tooling around static linking. I think my current preference for dynamic linking has more to do with the tooling that already exists around it than with static vs. dynamic linking itself.
@sir perf stat -r 1000 /bin/true
will give even more statistics about the CPU load. There is also a timing summary at the end showing the variation.
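For comparison, a small C sketch (my own illustration, not from the thread) that measures the same thing by hand: time N fork+exec+wait cycles of a binary. For something as trivial as /bin/true the result is dominated by process startup, which is where the dynamic loader's cost shows up.

```c
/* execbench.c - rough timing of repeated fork+exec+wait, e.g.
 *   ./execbench /bin/true 1000 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <program> [runs]\n", argv[0]);
        return 1;
    }
    int runs = argc > 2 ? atoi(argv[2]) : 1000;

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < runs; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            execl(argv[1], argv[1], (char *)NULL);
            _exit(127); /* exec failed */
        }
        waitpid(pid, NULL, 0);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("%d runs, %.1f us per fork+exec+wait\n",
           runs, elapsed / runs * 1e6);
    return 0;
}
```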
@xerz absolutely fucking wrong, Electron ships an entire browser's worth of features when you use maybe 40 of them.
@xerz there's no linker that stupid, give me a break, it's not the same thing
@xerz do you even realize that the domain you're talking about is completely unrelated to the problems I'm addressing here
@xerz people are idiots no matter what. But I've argued many times before that the only valid way to install software is through your package manager, where the experts have taken care to make sure it is built correctly for your system. It is to these experts that I am talking - not the morons who are distributing broken shit themselves.
@xerz "yes, this is better, but society is complicated therefore it doesn't matter"
@xerz at no point did I ever say "use it now or gtfo"
@sir related to this discussion:
The part on symbol versioning is also relevant:
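For anyone following along: symbol versioning lets one shared object carry several versions of the same symbol, so old binaries keep binding to the ABI they were built against while new links get the default. A minimal sketch using the GNU toolchain's .symver directive and a version script (the symbol and version names are made up for illustration):

```c
/* libfoo.c - build with:
 *   gcc -shared -fPIC -Wl,--version-script=foo.map -o libfoo.so libfoo.c
 *
 * foo.map:
 *   FOO_1.0 { global: foo; local: *; };
 *   FOO_2.0 { global: foo; } FOO_1.0;
 */
#include <stdio.h>

/* Old implementation, kept so binaries linked against FOO_1.0 keep working. */
void foo_old(void) { puts("foo, FOO_1.0 ABI"); }

/* New implementation; "@@" marks it as the default for new links. */
void foo_new(void) { puts("foo, FOO_2.0 ABI"); }

__asm__(".symver foo_old, foo@FOO_1.0");
__asm__(".symver foo_new, foo@@FOO_2.0");
```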
@sir FFmpeg library users would love that.
Even with a liberal ~3 years between ABI breaks, users still never bother to upgrade their code to use the new APIs.
If they could, they'd just pin the version and never bother to upgrade, ever.
But thanks to dynamic linking and distributions rejecting Windows-style DLL hell, it's upgrade or die - if their package stops working or compiling and there's no response, distributions will just drop it. It's happening right now due to lavr.
So if anything, it helps progress.
@christianbundy we should start by loosening the requirements for dynamic linking in distros and gradually moving packages to static builds, starting where the impact is highest, and building better tools for the distribution of statically linked binaries
@sir I think 4.6% of symbols being used on average doesn't mean the linker will remove the rest. That only happens if the .a is made up of enough independent object files.
It's even possible that one symbol pulls in all the rest of the code.
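The unit the linker pulls from a .a is the object file, not the individual symbol; a tiny illustration (file and symbol names are made up):

```c
/* util_a.c */
int used_helper(int x)   { return x * 2; }

/* util_b.c */
int unused_helper(int x) { return x + 1; }

/* main.c */
int used_helper(int x);
int main(void) { return used_helper(21); }

/*
 * cc -c util_a.c util_b.c main.c
 * ar rcs libutil.a util_a.o util_b.o
 * cc -o demo main.o libutil.a
 *
 * Only util_a.o ends up in demo, because the archive is resolved per
 * object file. If both helpers lived in one .c file (one .o),
 * unused_helper would be linked in too - unless the library was built
 * with -ffunction-sections and linked with -Wl,--gc-sections.
 */
```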
@sir How about LGPL licensed libraries?
@BartG95 LGPL permits static linking without being viral; and contrary to popular belief, the GPL requires virality even with dynamic linking