

On Tyranny: Twenty Lessons from the Twentieth Century by Prof. Snyder is a book that everyone should read these days.


That depends. Did he stop kissing his ring?
Frankly, if there were a transsexual MS-13 member who illegally crossed the border, murdered several border guards, and ate a cat on her way, as long as she expressed support for Trump she wouldn’t have anything to worry about.
Streaming (as pointed out in the other comment) was my initial reaction too, but indeed at the time HTTPS for streaming would have been very rare.
Another possibility is to realize that OpenSSL isn’t just for communication; it also has implementations of cryptographic algorithms.
Perhaps OpenSSL was used for validating the license key? For example, they could sign the license with their private key and WinAmp could verify its authenticity with the corresponding public key.
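A minimal sketch of what such a scheme looks like, purely as an illustration (this uses Python’s cryptography package and made-up license data, not anything WinAmp actually shipped):

```python
# Hypothetical license-key check: the vendor signs the license data with a
# private key; the application embeds only the public key and verifies it.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Vendor side: generate a key pair and sign the license blob.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
license_blob = b"user=alice;product=player;expires=2030-01-01"
signature = private_key.sign(license_blob, padding.PKCS1v15(), hashes.SHA256())

# Application side: only the public key ships with the binary, so users can
# check a license's authenticity but cannot forge new ones.
public_key = private_key.public_key()
try:
    public_key.verify(signature, license_blob, padding.PKCS1v15(), hashes.SHA256())
    print("license is valid")
except InvalidSignature:
    print("license is forged or corrupted")
```

OpenSSL’s libcrypto exposes the same signing and verification primitives in C, which is why linking it for a license check rather than for TLS is plausible.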


It does. It does do this. That’s the Docker image, not the Dockerfile. You are confusing the spec with the artifact. If you want reproducible dev envs you use a system like Compose or any number of other tools to launch images from your artifact store.
You use them, make sure they are always pristine and cleaned up after use, and don’t have network connectivity or other things that could affect the build.
Or you could use Nix which builds everything this way.
Notice that you mentioned additional systems to achieve that; you wouldn’t need them if Docker were truly providing it.
LOL. We always have this problem when people only use spec files and not the artifacts. You are comparing apples to oranges by comparing the Dockerfile to a built rpm package. Let me help you:
An rpm package == docker image
An rpm .spec file == dockerfile
If you only give people spec files and have them rebuild the package, you will get different hashes of the rpm file. Similarly, you would likely not change your spec file between releases, yet you know your rpm file is going to be different.
But that’s the whole point. A developer wants the spec file to ALWAYS generate the same artifact. Most devs even believe it does, and get frustrated when it doesn’t (like in your example; see the toy sketch below).
Nix basically solves that. It even removes the need for tools like Artifactory: the code fully defines the final binary. Of course you don’t want to rebuild everything every time, so a cache is introduced.
Before you say that this is just Artifactory renamed: it really isn’t. It actually works like a cache. I can remove any piece of it, and the missing pieces will be rebuilt if they are needed. It is also used by the builder, so it doesn’t repeat work. I especially like it when working on a feature branch: it compiles the code, I eventually merge it, and if my merge did not modify the code it won’t waste time rebuilding the same thing.
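A toy illustration of the spec-vs-artifact gap (not any real build system; just a hypothetical build step that stamps the output with the build time):

```python
# Two runs of the exact same "recipe" produce artifacts with different hashes,
# because the build embeds something taken from the environment (a timestamp).
import hashlib
import time

def build_artifact(source: bytes) -> bytes:
    # Pretend build step: "compile" the source and stamp it with the build time.
    return source + f"\nbuilt-at: {time.time()}".encode()

source = b"print('hello')"
first = build_artifact(source)
time.sleep(0.01)
second = build_artifact(source)

print(hashlib.sha256(first).hexdigest())
print(hashlib.sha256(second).hexdigest())  # differs, even though spec and source are identical
```

Timestamps are the trivial case; unpinned base images and dependencies fetched at build time cause the same effect, which is exactly what Nix’s store hashes and lock files are designed to eliminate.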


I see that too. Despite what most people say, they aren’t truly interested in learning new things (at least not things that would force them out of their comfort zones).
I mean, if the team tries to move away from it then there’s not much one can do.
Maybe they can look into some tooling that, while not Nix itself, uses Nix under the hood and still provides some of the benefits.
I’ve heard about Devbox and Flox. Those at least try to provide a reproducible dev environment (note: I haven’t used them myself, as I feel the abstraction they add places limits on Nix’s functionality, but others might see that as a benefit).
I’m also getting the impression that things are getting smoother over time. With poetry2nix, for example, the big problem is packages that depend on C libraries; those are not specified as Python dependencies, so poetry2nix has an overrides file which adds them.
Previously I very frequently had to update it and contribute new package overrides there. I was away from Python for a while, as I was assigned to a Go project for half a year; now that I’m starting another Python project, I tried to use it and things just worked. All I had to do was use the latest poetry2nix, and my project compiled into a working container.


The dockerfile does not guarantee this, but the docker image or any OCI image does.
That’s true, but also misleading.
An OCI image is like having a JPEG image, while a Dockerfile is like the text prompt you write to ChatGPT to generate the image.
Yes, every time you look at the JPEG it is the exact same image, but that’s kind of obvious. The real problem is that if you feed the same prompt to ChatGPT you will get something slightly different every time.
Nix brings true reproducibility. In this analogy, the same prompt produces the exact same image. This allows you to check that prompt into your source control, and if you mess something up there’s always a way back.
This is something Docker promised, but never delivered.
Dockerfile should not be confused with the artifact.
It should not, but artifacts never had a problem with mutating before we had Docker. If you generate an rpm package and store it in Artifactory, it is always the same exact package (unless someone overwrote it, lol).
Operationally we usually expect a Dockerfile to be identical across many builds of different releases and know the artifact produced will have different code.
But that’s basically the problem Docker claimed to fix. This is also the problem you frequently encounter when a pipeline that worked fine one day suddenly stops working the next day, because something your Dockerfile referenced changed (maybe a base image was updated and broke something; you can pin things to specific hashes, but you need to be very conscious about that, and in the wild I’ve never seen anyone really doing it).
Anything you are doing with Nix to make the lock files perfect is the same amount of work you’d be doing with any method of producing an OCI artifact.
It is not. Hashes and lock files are built in, and Nix uses them by default.
If, for example, I use a flake, the flake.lock will pin the exact version of nixpkgs (the package repo) in time. That happens without any additional effort. poetry2nix converts the poetry.lock file into Nix packages that are once again locked in time, and that also happens behind the scenes.
The result is that all dependencies are locked and live in my repo: the Python dependencies from poetry.lock, as well as the rest of the system (Python itself, C libraries, etc.) from flake.lock. So everything is repeatable without any effort on my side.
Repeating that with a Dockerfile is much more challenging.
I do think your approach is interesting though. Certainly less effort than manually packing an OCI with something like buildpacks, or trying to work through Bazel to get a distroless build (two other methods that don’t make massive images with a Debian base). And obviously ‘FROM scratch’ in docker build land is a nightmare.
If you get your app built with Nix, the whole thing, including all of the app’s dependencies, is explicitly referenced, so you can wrap it into a Docker image, an rpm file, an OS image, etc.
It’s controversial, but IMO Nix is actually easier than what we are doing now. I think the problem is that it is a massive paradigm shift: most of what people know about existing technologies will generally not be useful, so you have to relearn everything.
But IMO it pays off. For example, when starting a new project I can package the whole thing in 5 minutes: poetry2nix translates the project and its dependencies into Nix packages, and then, since Nix understands my project’s dependencies, it can package it automatically.


I started using Nix to build containers that contain just my app and nothing else. The benefits are that it makes containers smaller, removes unused components (fewer potential attack vectors), and a container built from a specific checked-out version will always be identical (a Dockerfile on its own, without extra work, doesn’t provide such a guarantee). I also have the ability to customize Python and the dependencies to remove additional pieces that I don’t need (this unfortunately requires some experience with Nix to know how to do it).
I wrote my own abstraction on top of poetry2nix and nix2container to remove the need for boilerplate: https://github.com/takeda/nix-cde
The example shows how a hello-world application can be packaged, and then how I can reduce its size further from 178 MB to 68.9 MB. That doesn’t even include using musl to get the size lower still.
Though I totally agree with the author about venv; that’s what I did before, and it’s still what I do in situations where I can’t use Nix. venv is standardized, much more predictable, and prevents surprises.
There are some red flags for me:
Did you compile and use that on your phone, or are you using the app from the app store?
Do we know how it does that? Signal is praised for security, but a lot of things it does feel iffy and don’t make me trust it.
To add to that: the Russian government was demanding the ability to access messages or it would ban Telegram in the country.
I didn’t hear anything beyond that, but Telegram continues to operate there.


If I had this requirement I would just generate a file of a specific size, place it on one server, and on the other have a shell script running via cron that measures the time it takes to download the file (something like the sketch below).
It seems like a relatively simple problem.
BTW, are you sure you want to test download speed and not latency? I think some routers might have the latter built in.
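For what it’s worth, a minimal sketch of that probe in Python instead of shell (the URL, file name, and log format are all made up for the example):

```python
#!/usr/bin/env python3
# Download-speed probe meant to be run from cron: fetch a fixed-size test file
# from the other server and log how long the transfer took.
import time
import urllib.request

TEST_URL = "http://example.internal/testfile-100mb.bin"  # hypothetical test file

start = time.monotonic()
with urllib.request.urlopen(TEST_URL) as response:
    size = len(response.read())  # bytes actually transferred
elapsed = time.monotonic() - start

# One line per run; redirect cron's output to a file to build up a history.
print(f"{time.strftime('%Y-%m-%dT%H:%M:%S')} {size} bytes in {elapsed:.2f}s "
      f"({size / elapsed / 1_000_000:.2f} MB/s)")
```

A crontab entry like `*/15 * * * * /usr/local/bin/speed-probe.py >> /var/log/speed.log` (paths hypothetical) would give you a data point every 15 minutes.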


So first of all, your mom is reluctant to let others know where she lives. It has nothing to do with rights, but with decency and respecting her wishes.
As for your rights, you actually have very few as an adult. Technically your mom could now say that you have to move out, and if she did that you would be on your own, even if that meant being homeless.
Since you are so eager to go on a date that you’re asking about your rights with respect to your mom, I think you likely don’t understand why your mom is concerned, and you sound like easy prey for someone who could just use you in a way you will deeply regret shortly after.
Why not meet someone in normal circumstances (like school, work, etc.) instead of dating strangers?
Remember that gaining additional privileges is a small part of being an adult; a much bigger part is the responsibilities you take on and the consequences of the bad decisions you make.
Don’t start your adult life with something you might regret.
It’s funny that kids wish they were adults while adults wish they were kids again.


I didn’t think about that, though if it makes it harder to track (can’t they just check the user agent?), could that actually be good? The sites would never know exactly how many users they would lose, so they might be more hesitant to pull the trigger.


Absolutely. If you think you can switch once Chrome becomes completely hostile, it will be too late.
The reason they are trying those things in Chrome is that Firefox’s market share is currently low. They are counting on you not having the option to run Firefox anymore, because sites will stop supporting it. Don’t let that happen.


We’ve seen other similar news from China that turned out to be bunk. I wouldn’t hold my breath. I would love to be wrong though.


This is not “perfect is the enemy of good”; it would be if I were arguing about MIT vs GPL, etc.
By signing a CLA you’re surrendering the copyright to the company, and that allows them to do whatever they wish with your contribution, including switching back to closed source.
HashiCorp was able to change the license of their products exactly thanks to the CLA.


Yes, thanks for pointing it out. As long as it is some organization that can’t be bought, it should be fine. I didn’t include that because it would make my response more confusing.
Essentially a CLA gives the entire copyright to a specific entity. In the case of the FSF, that entity would likely use it to fight violations, while some startup likely intends to change the license once their product gets popular, to cash out on it (for example, what HashiCorp did recently before selling to IBM).


They just want to profit from the purchase, but they are no longer competitive.
Looks like they are looking for suckers to contribute to their code base for free without even making it actually open source.
IMO at this point WinAmp does not offer anything beyond name recognition and nostalgia. Isn’t qmmp essentially an open source version of WinAmp?


Yeah, I believe he mentioned that ploy in this book. In The Road to Unfreedom he also mentions how LGBT people have basically taken the place of the Jews. It tries to frame itself as standing for traditional Christian values, but in reality it is just fascism that tries to use religion. Fascism needs an enemy, and this one is easy to attack.