I was talking about this last night throughout my replies but I reckon it ought to be posted outright:

drewdevault.com/dynlib.html

dynamic linking bad

static linking good
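
The difference in practice, as a minimal sketch (assuming a GNU toolchain):

    printf 'int main(void) { return 0; }\n' > hello.c
    gcc hello.c -o hello-dyn           # default: dynamically linked
    gcc -static hello.c -o hello-sta   # statically linked
    ldd hello-dyn                      # lists libc.so.6 and the dynamic loader
    ldd hello-sta                      # "not a dynamic executable"
    ls -l hello-dyn hello-sta          # the static binary is larger, but self-contained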

@sir I think the old arguments for dynamic linking are really irrelevant these days.

@feld @sir Looking back, and having direct experience with dynamically-loaded yet statically-linked libraries on a certain "toy" OS, I openly question whether *any* of the arguments were valid at any time whatsoever.

@sir I opened that page to be presented with a missing graph because apparently Privacy Badger decided to block l.sr.ht. Any idea why?

@YaLTeR it's probably being overzealous by default. l.sr.ht is my domain, I can assure you it's legitimate

@sir The fact that this is hard to determine is precisely why I'm skeptical about static linking as a sysadmin:

>It is also unknown if any of these vulnerabilities would have been introduced after the last build date for a given statically linked binary

@jfred my point is that even without the answer to this question, the impact is small, but if this question were answered, the impact could be virtually non-existent (and the question is answerable).

@sir Sure, but:

>Not including libc, the only libraries which had "critical" or "high" severity vulnerabilities in 2019 which affected over 100 binaries on my system were dbus, gnutls, cairo, libssh2, and curl.

If everything was statically linked, how sure would you be that you've actually updated those over 100 binaries? Yes, there are only a few libraries that are widely used, but it sounds like a royal pain to track all of them down when there is a major vuln.

@jfred because they're in the dependency list for the package.
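
You can also enumerate them by hand; a rough sketch, assuming a typical Linux layout and GNU binutils:

    # list installed binaries that dynamically link against, e.g., libssh2
    for bin in /usr/bin/*; do
        ldd "$bin" 2>/dev/null | grep -q 'libssh2' && echo "$bin"
    done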

@sir @jfred , I think he means, how can you ensure they're all updated upstream with the fix?

@indirection @jfred if upstream needs to make changes to address the security problem then dynamic linking isn't going to save you

@sir @jfred , I think the logic is, say if we're using Debian stable, they will all link against 1 lib, so only 1 lib needs to be updated.

My question now is: what disadvantages does static linking bring?

> dynamic linking won't save you

Yeah I was going to say this myself too 😛

@indirection @jfred static linking comes with a minor increase in binary size, which may be offset by not having to distribute the unused portion of the shared library; a minor increase in update size, which may be offset by differential updates; and a large increase in rebuild times when a vulnerability appears, which may be offset by the usual delay between vulnerability discovery and announcement, during which distros are usually clued in and given time to prepare a mitigation.
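
On the update-size point, binary deltas are routine already; a sketch with bsdiff, assuming it's installed (app-1.0 and app-1.1 are hypothetical stand-ins for two builds of the same package):

    bsdiff app-1.0 app-1.1 app.patch            # emit a compact delta between the two builds
    bspatch app-1.0 app-1.1.rebuilt app.patch   # old build + delta => new build
    cmp app-1.1 app-1.1.rebuilt                 # verify the reconstruction is identical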

@sir , I guess it's just a matter of time until the tooling is there: tooling smart enough to do partial binary updates and to link only the necessary code.

Right now I know statically linked Rust binaries are huge. I'm not sure about other compiled languages. I'm assuming C/C++ linkers are smart today since they're "leading the way" in a sense.
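
From what I understand, C toolchains only drop unused code when asked; something like the following (a sketch, assuming gcc and GNU binutils):

    # put each function and data item in its own section, then let the
    # linker garbage-collect the sections nothing references
    printf 'void used(void) {}\nvoid unused(void) {}\n' > lib.c
    printf 'void used(void); int main(void) { used(); return 0; }\n' > main.c
    gcc -ffunction-sections -fdata-sections -c lib.c main.c
    gcc main.o lib.o -Wl,--gc-sections -o app
    nm app | grep unused    # no output: the unreferenced section was discarded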

@sir Okay, that's fair. There's lots of good distro tooling right now around dynamically linked packages and not a whole lot around statically linked ones. I will concede that, if the tooling were as good, static linking could be as good.

I use Guix right now for example; when a library's package definition gets updated, everything that depends on it is automatically recompiled if there's not already a binary package built on CI. It's hard to beat that peace of mind.

@jfred hahaha if you're already recompiling everything which depends on your shared library when it updates then what's the fucking point!

@sir Well, in the case of security patches in particular you/the maintainers would typically use grafts instead to be quicker about it: guix.gnu.org/manual/en/html_no

@jfred I see. Similar tooling could be made for static linking. Faster rebuilds could be accomplished by keeping the artifacts around, and faster distribution could be accomplished with binary patches. More sophisticated analysis could also be performed, with cacheable intermediates, to narrow down the list of affected programs. This would also be useful for shared objects: it could tell you which programs to reboot after a security update.
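
The "what to reboot" part can already be approximated today; a rough sketch, assuming lsof is available (on Linux it marks deleted-but-still-mapped files with DEL in the FD column):

    # list processes still mapping shared objects that were replaced on disk
    lsof +c0 2>/dev/null | awk '$4 == "DEL" && $NF ~ /\.so/ { print $1, $2, $NF }' | sort -u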

@sir Yeah - I'd love to see that tooling around static linking. I think my current preference for dynamic linking has more to do with the tooling that already exists around it than with static vs. dynamic linking itself.

@sir perf stat -r 1000 /bin/true

Will give even more statistics about the CPU load. There is also a time summary at the end with the variation.

@sir the problem with static linking is that it's basically what Electron does

@sir Yes, you can build your perfect, cute, minimalist software by putting in 10x the effort and experience. That's not how it goes for the absolute majority of people, including devs. And you'll end up with software that is not lightweight, not up-to-date, and not secure, because many devs just won't bother: they'll use whatever runtime they find the easiest/fanciest, distribute it all in a single package on their own, and never ever update dependencies, because that takes effort and can break existing things.

For Linux distros though, I guess static linking is not that bad, but I would prefer it with something like LTO.

@xerz absolutely fucking wrong, Electron ships an entire browser's worth of features when you use maybe 40 of them.

@sir That *is* part of my point though, as *your* static linking is not what you'll often see in the wild.

@xerz there's no linker that stupid, give me a break, it's not the same thing

@sir It's not the linker that is the problem; it's not a technical problem per se. It's a usability issue: people tend toward the path with the least friction, where as much as possible has been done for them beforehand, and the tooling that is out there is not properly prepared for that way of thinking. Most people do *not* want to, or even *can't*:

- Choose the set of tooling to use per application, piece by piece
- Constrain to a specific operating system or user interface
- Optimize resource usage and security over development time
- Distribute software through reliable channels, like a Linux distribution
- Keep it updated with the latest versions of libraries (so as to avoid API breaks or unexpected behavior changes)

That takes knowledge, that takes effort, that takes time. That is a huge investment for delivering what is, in essence, the same idea, with differences that users perceive as subtle at best.

So, people just throw in Electrons and React Natives.

@xerz do you even realize that the domain you're talking about is completely unrelated to the problems I'm addressing here

@sir It is related, though. Ultimately, you are better off having *some* of those dependencies be controlled by the OS in the form of dynamic libraries than having it all in a single, monolithic package. That's my point.

@sir So yes, dynamic linking is (imho) all about frameworks and distribution channels, not about the advantages and disadvantages of linking itself. Focus should be put on solving those, and for now I find dynamic linking better at that.

@xerz people are idiots no matter what. But I've argued many times before that the only valid way to install software is through your package manager, where the experts have taken care to make sure it is built correctly for your system. It is to these experts that I am talking - not the morons who are distributing broken shit themselves.

@sir Well, everyone is ultimately a human being with flaws and constraints, so I would not throw all of my cards at "I'm better than that, I'll always use this stack of higher quality software and I'll always be able to write and maintain efficient software that will always meet all of my (boss') demands".

Yes, in a finely tuned and controlled environment, static linking tends to work better. Like writing C or assembly, or any other first-hand technical choice. But everyone eventually needs to get things done faster or messes up in the process. At the end of the day, We Live In A Society™ and even if you don't have these issues, you'll have to work with other people and others' software. I guess it's a pretty philosophical thing in a way?

@sir I would say it's a "the world isn't ready yet" thing. Something to aim at, but please don't expect every one of us, even in FOSS fedi, to start using Oasis or Stali or statically-linked LFS.

@xerz "yes, this is better, but society is complicated therefore it doesn't matter"

go away

@sir Don't worry, I didn't say or imply that. As I was saying, it's more of a thing to work towards than "dynamic linking bad, static linking good, use it now or gtfo".

@xerz at no point did I ever say "use it now or gtfo"

@sir Well, sorry then; that's the impression I got from the call for at least distro packagers to go static. If it's a thing to aim at and gradually work on so it's done right, then sure enough.

@sir Ironically, this just reminds me how I'm still angry at the Arch Haskell maintainers for breaking everything by trying to force dynamic linking on a platform that assumes static.

@xerz @Azure @sir I had to go with Stack from day one when I started learning Haskell because it would fail to compile even a basic hello world application. :blobfoxangrylaugh:

@sir FFmpeg library users would love that.
Even with a liberal ~3 years between ABI breaks, users still never bother to upgrade their code to use the new APIs.
If they could, they'd just pin the version and never bother to upgrade, ever.
But thanks to dynamic linking and distributions rejecting the Windows DLL hell, it's upgrade or die: if their package stops working or compiling and there's no response, distributions will just drop it. It's happening now due to lavr.
So if anything it helps progress.

@sir Thanks for sharing this research! Where do we go from here? I've read a bit about distros like sta.li/ but that's about it.

@christianbundy we should start by loosening the requirements for dynamic linking in distros and gradually moving packages to static builds, starting where the impact is highest, and building better tools for the distribution of statically linked binaries

@sir I think that only 4.6% of symbols being used on average does not mean the linker will remove the rest. That only happens if the .a consists of enough independent object files.

It is even possible for one symbol to pull in all the rest of the code.
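
A quick way to see the granularity at work (a sketch, assuming gcc and binutils, without -ffunction-sections/--gc-sections):

    # one object file: both symbols travel together
    printf 'void used(void) {}\nvoid unused(void) {}\n' > both.c
    printf 'void used(void); int main(void) { used(); return 0; }\n' > main.c
    gcc -c both.c main.c && ar rcs libone.a both.o
    gcc main.o -L. -lone -o app1
    nm app1 | grep unused   # present: the whole both.o was pulled in

    # two object files: the unused one is never extracted from the archive
    printf 'void used(void) {}\n' > a.c
    printf 'void unused(void) {}\n' > b.c
    gcc -c a.c b.c && ar rcs libtwo.a a.o b.o
    gcc main.o -L. -ltwo -o app2
    nm app2 | grep unused   # no output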

@sir > static linking

What about LGPL-licensed libraries? The GNU Lesser GPL was specifically designed to level technicalities such as the type of programming language, the type of linking, or anything of the sort.

@BartG95 The LGPL permits static linking without being viral; and contrary to popular belief, the GPL requires virality even with dynamic linking.
