The problems inherent in dynamic linking became apparent to the industry years ago, but instead of doing the logical thing and moving back to static linking, they invented Docker.
I think Docker was created because people took an old programmer's joke too seriously.
Customer: Software does not work!
Developer: It works on my machine!
Manager: So we just ship your machine to the customer. Problem solved.
@sir wait till they discover the problem with bundling dependencies again. Then they will invent dynamic linking across docker images or sth.
@wolf480pl they already have, and they called it docker compose
@wolf480pl useful for when shipping the entire OS to support your one application isn't enough, and you instead want to ship 5 operating systems
@sir btw. could we, like, ship libraries and executables as .o, and link them at install time?
@wolf480pl yeah, but why should we
@sir hm... you're right, if it's the distro shipping the binaries then we don't have to. The distro maintainers will rebuild all the necessary packages.
@sir doesn't that operate at the TCP level?
I'm thinking weird LD_PRELOAD hacks that make one docker container load openssl from another docker container (over HTTP) because the first container's author couldn't be assed to update openssl
@wolf480pl I haven't seen anything like that but I wouldn't be surprised
@sir I haven't seen it either, but it could be just around the corner
https://packages.debian.org/buster/busybox-static - 0.9 MB packaged / 2 MB installed. Google ships 0.35 MB compressed / 0.95 MB uncompressed files on the search engine homepage.
Source (just yesterday):
@sir Docker really solves problems that should not exist in the first place.
@email@example.com Why use a long-known method that costs only disk space, which has become cheaper and cheaper until it's well nigh disposable, when you can slap some new OS features and software tooling together, give it a fun name and logo, and get to explore all sorts of new and exciting unknowns? Ideally while not looking too hard at the experience of others who have done similar things in other environments, so as to enhance the excitement. What do you want? Stability and reliability?!
@sir Docker isn't just for bundling dependencies, it also provides excellent security because each container is isolated from the next. So if an attacker compromises one application on a server the other applications can't be compromised unless the attacker figures out a way to break out of the Docker container.
@jwinnie sorry, I was upset about something else when I wrote that other reply.
Docker provides awful security. It's a glorified chroot and other tools have done it much, much, much, much better.
@sir What about Docker's networking system, running each container within its own, isolated subnet? If you ran an application inside a chroot it would still be able to access the networking stack of the host machine, right?
@sir Also Docker containers have a separate set of sockets so you can't mess with the host system's D-Bus services or init system from within the container, right?
Is there any service that's better than Docker that offers this level of isolation?
@jwinnie Docker is just an ugly wrapper around lxc, look it up.
@sir Ah okay. So you're saying lxc is good, but Docker is not? What's the reasoning behind that?
@sir Static linking and bundling dependencies into containers make it very difficult to have automated, coordinated, and properly tested security updates for libraries. For example, containers on Docker Hub are a security dumpster fire.