
The reasons:

> Improved compatibility with other Unixes/Linuxes in behavior:

Well, some of them.

> Improved compatibility with other Unixes (in particular Solaris) in appearance:

I guess, but if you supported Linux you already supported Solaris, so if you think of Solaris as second class then nothing is gained.

Also: Solaris is (to me) clearly on its way out. Not worth a migration to converge with it.

> Improved compatibility with GNU build systems:

This one is pure hogwash. Either you need to support the systems that don't do this (e.g. OpenBSD), or you only care about Linux. (this is a bit simplified, but pretty true)

So your build system still needs to support non-merged layouts.

Except now you're going to bake Linux-specific merge assumptions into your build system, so that it'll be even more broken.

> Improved compatibility with current upstream development:

This seems more like tech debt to me, in the name of expedience.

I'm not saying here that it shouldn't be done. I'm saying these are terrible reasons.



> > Improved compatibility with GNU build systems:

> This one is pure hogwash. Either you need to support the systems that don't do this (e.g. OpenBSD), or you only care about Linux. (this is a bit simplified, but pretty true)

> So your build system still needs to support non-merged layouts.

> Except now you're going to bake Linux-specific merge assumptions into your build system, so that it'll be even more broken.

Well considering

> Not implementing the /usr merge in your distribution will isolate it from upstream development. It will make porting of packages needlessly difficult, because packagers need to split up installed files into multiple directories and hard code different locations for tools; both will cause unnecessary incompatibilities. Several Linux distributions are agreeing with the benefits of the /usr merge and are already in the process to implement the /usr merge. This means that upstream projects will adapt quickly to the change, thus making portability to your distribution harder.

I think they intended this to play out like systemd/logind: Red Hat maintainers will only care about a merged /usr, and if you want to use anything they have their hands in, you'd better adapt.


> I think they intended this to play out like systemd/logind: Red Hat maintainers will only care about a merged /usr, and if you want to use anything they have their hands in, you'd better adapt.

The argument this article seems to make is that you can essentially hard-code the paths to binaries in your build system, because /lib is /usr/lib and /bin is /usr/bin (from a build script's point of view it doesn't matter which is a symlink to which).
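To make that claim concrete, here's a minimal sketch (my reading of it, not from the article) that checks whether the running system is merged, by resolving /bin and /usr/bin to their real paths:

```shell
#!/bin/sh
# Sketch: on a merged system, /bin is a symlink into /usr (or vice versa),
# so both directories resolve to the same place and a build script can't
# tell -- and doesn't care -- which direction the symlink points.
if [ "$(readlink -f /bin)" = "$(readlink -f /usr/bin)" ]; then
    echo MERGED
else
    echo UNMERGED
fi
```

On a merged distro this prints MERGED; on e.g. OpenBSD or an unmerged Linux it prints UNMERGED, which is exactly why the build system can't assume either answer.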

It seems to be saying that your build system no longer needs to search for the libraries.

But that's not true. You still do.

And if you search for it then it doesn't really matter where it is. Clearly if you search for `bash` you'll search /bin and /usr/bin. For at least two reasons:

1) Maybe you're running a system that isn't merged. Then it could be in either place. It doesn't matter that Red Hat is merged.

2) You'll need to check other locations anyway. Like /usr/local/bin/bash, to support some BSDs. Why does it matter if it's two, three, or $(echo $PATH | sed 's/[^:]//g' | wc -c) paths?
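The search the two points above describe might look like this, as a minimal sketch of what a portable configure script has to do anyway (the candidate directory list is illustrative, not exhaustive):

```shell
#!/bin/sh
# Sketch: search a fixed list of candidate directories for a program,
# the way a portable build script must -- merged /usr or not.
find_prog() {
    prog=$1
    for dir in /bin /usr/bin /usr/local/bin /opt/bin; do
        if [ -x "$dir/$prog" ]; then
            printf '%s\n' "$dir/$prog"
            return 0
        fi
    done
    return 1
}

find_prog sh
```

Once you have this loop, adding or removing one directory from the list costs nothing, which is the point: the merge doesn't let you delete the loop.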

And for libraries you may need to search in /opt, and run pkg-config for any compile and link flags needed. Why does it matter whether `pkg-config --libs foo` contains an -L flag? You still need to run it.
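A minimal sketch of that library-side flow ("foo" is a placeholder package name, not a real library):

```shell
#!/bin/sh
# Sketch: regardless of where a library actually lives (/usr/lib, /opt, ...),
# the build asks pkg-config for the flags instead of assuming a layout.
pkg=foo  # hypothetical package name for illustration
if command -v pkg-config >/dev/null 2>&1 && pkg-config --exists "$pkg"; then
    CFLAGS=$(pkg-config --cflags "$pkg")
    LIBS=$(pkg-config --libs "$pkg")
else
    # Fallback when the .pc file isn't found: still assume nothing
    # about the directory layout, just link by name.
    CFLAGS=""
    LIBS="-l$pkg"
fi
printf 'CFLAGS=%s LIBS=%s\n' "$CFLAGS" "$LIBS"
```

Whether `--libs` emits an explicit -L path or not is pkg-config's business; the build script is identical on merged and unmerged systems.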



