
Why Non-Distributed Systems Suck

Feb. 1st, 2008 | 12:56 pm

I have a patch that I just emailed to an upstream committer.

The main repository is in Subversion.

Now I want to work on a second patch.

What to do?

Toss my current patch out, revert, and work on another patch.

If this was a distributed system?

I would just commit locally and then start work on the new patch. Once the first patch was committed to the main repository, a pull would bring it in and merge it with my local work. I would never notice the issue at all.
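A rough sketch of that workflow with Mercurial (the repository URL and patch details here are hypothetical, just to illustrate the shape of it):

```shell
# Commit the first patch locally; upstream never sees it yet.
hg commit -m "First patch: fix the frobnicator"

# ...hack on the second patch, committing locally as you go...
hg commit -m "Second patch, work in progress"

# Later, after upstream has applied the first patch:
hg pull http://hg.example.org/project   # fetch upstream changesets
hg merge                                # merge them with local work
hg commit -m "Merge upstream"
```

The key point is that local commits and the upstream repository are decoupled: nothing forces me to keep my working copy patch-free while I wait.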

And what about the process where I work on a patch in pieces and commit along the way?

Well, you can forget about that with a non-distributed system.

The solution to my problem? It looks like Fedora has SVK packages. With those I should be able to get around the limitations of the remote server being Subversion.
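If I understand SVK correctly, the idea is to mirror the Subversion repository into a local depot, branch it, and commit locally against the branch. A sketch, with entirely hypothetical paths and URLs:

```shell
# Mirror the upstream Subversion repository into the local SVK depot.
svk mirror http://svn.example.org/project/trunk //mirror/project

# Pull down the upstream history.
svk sync //mirror/project

# Create a local branch of the mirror and check it out.
svk cp -m "local branch" //mirror/project //local/project
svk checkout //local/project ~/work/project

# Commits now land on the local branch, not on upstream.
cd ~/work/project
svk commit -m "First patch: fix the frobnicator"

# Later: refresh the branch from upstream, or send work back.
svk pull    # merge new upstream revisions into the local branch
svk push    # merge local commits back to the Subversion server
```

Not as clean as a native distributed system, but it restores local commits on top of an SVN-only server.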

What do I want?

The remote server to be Mercurial (or any other modern system...).

Go back a decade and I adored CVS. Compared to the options at the time, it was a winner.

Today? Not so much.


Comments {10}



from: dormando
date: Feb. 2nd, 2008 08:56 am (UTC)

Real shame. I hear people are still frustratingly working on it :)

I've used Mercurial a bit, but almost only with memcached.

I got into git a few years ago when SVN shit the bed at Gaia. At the time a few friends were using it and a few projects I followed were as well (such as the linux kernel).

Eh, it has absolutely killer performance and blew through all of the crazy multi-thousand-file merges we were doing at Gaia, and the bulk of the projects I follow are still on git. In the years since, the interface has become a lot more solid, and it's still really fast.

I see the two as being interchangeable. I'll be quicker with git until I learn all of the "behavioral aliases" for mercurial.
