<h2>Keeping backups</h2>
<p><em>The Mandelbear's Musings (mdlbear), 2020-10-23</em></p>
<p> It's been a while since I described the way I do backups -- in fact, <a href="https://stephen.savitzky.net/Doc/Linux/keeping-backups/">the only
public document</a> I could find on the subject was written in 2006, and
things have changed a great deal since then. I believe there have been a
few mentions on Dreamwidth and elsewhere, but in this calamitous year it
seems prudent to do it again. Especially since I'm starting to feel
mortal, and starting to think that some day one of my kids is going to
have to grovel through the whole mess and try to make sense of it.
(Whether they'll find anything worth keeping or even worth the trouble of
looking is, of course, an open question.)
<p> My home file server, a small Linux box called Nova, is backed up by simply
copying (almost -- see below) its entire disk to an external hard drive
every night. (It's done using <code>rsync</code>, which is efficient
because it skips over everything that hasn't been changed since the last
copy.) When the disk crashes (it's almost always the internal disk,
because the external mirror is idle most of the time) I can swap in the
external drive (and have, several times), make it bootable, order a new
drive for the mirror, and I'm done. Or, more likely, buy a new pair of
drives that are twice as big for half the price, copy everything, and
archive the better of the old drives, updating it occasionally.
<p> That's not very interesting, but it's not the whole story. I used to make
incremental backups -- instead of the mirror drive being an exact copy of
the main one, it was a sequence of snapshots (like Apple's Time Machine,
for example). There were some problems with that, including the fact
that, because of the way the snapshots were made (using <code>cp -l</code>
to copy directories but leave hard links to the files that hadn't
changed), the backup took more space than it needed to, and made the
backup disk very difficult -- not to mention slow -- to copy if it started
flaking out. There are ways of getting around those problems now, but I
don't need them.
<p> The classic solution is to keep copies offsite. But I can do better than
that because I already have a web host, and I have Git. I need to back up
a little.
<p> I noticed that almost everything I was backing up fell into one of three
categories:
<ol>
<li> Files I keep under version control.
<li> Files (mostly large ones, like audio recordings) that never change
after they've been created -- recordings of past concerts, my
collection of ripped CDs, the masters for my CD, and so on. I
accumulate <em>more</em> of them as time goes by, but most of the old
ones stick around.
<li> Files I can reconstruct, or that are purely ephemeral -- my browser
cache, build products like PDFs, executable code, downloaded install
CDs, and of course the entire OS, which I can re-install any time I need
to in under an hour.
</li></ol>
<p> Git's biggest advantage for both version control and backups is that it's
distributed -- each working directory has its own repository, and you can
have shared repositories as well. In effect, every repository is a
backup. In my case the shared repositories are in the cloud on <a href="https://dreamhost.com/">Dreamhost</a>, my web host. There are
working trees on Nova (the file server) and on one or more laptops. A few
of the more interesting ones have public copies on GitLab and/or GitHub as
well. So that takes care of Group 1.
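<p> Setting up one of those shared repositories is a one-time operation; here's a sketch, with a hypothetical host and path:

```shell
#!/bin/sh
# Create a bare repository on the web host and wire a working tree to it.
ssh user@example.dreamhost.com 'git init --bare git/project.git'

# In an existing working tree on the file server or a laptop:
git remote add origin user@example.dreamhost.com:git/project.git
git push -u origin HEAD

# Any machine can then make itself a full backup:
git clone user@example.dreamhost.com:git/project.git
```

Every clone carries the complete history, so losing any one machine (or even the host) costs nothing that has been pushed.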
<p> The main reason for using incremental backup or version control is so that
you can go back to earlier versions of something if it gets messed up.
But the files in Group 2 <em>don't</em> change, they just accumulate.
So I put all of the files in Group 2 -- the big ones -- into
the same directory tree as the Git working trees; the only difference is
that they don't have an associated Git repo. I keep thinking I should set
up <a href="https://git-annex.branchable.com/">git-annex</a> to manage
them, but it doesn't seem necessary. The workflow is very similar to the
Git workflow: add something (typically on a laptop), then push it to a
shared server. The Rsync commands are in a Makefile, so I don't have to
remember them: I just <code>make rsync</code>. (Rsync doesn't copy
anything that is already at the destination and hasn't changed since the
previous run, and by default it ignores files on the destination that
don't have corresponding source files. So I don't have to keep a
<em>complete</em> copy of my concert recordings (for example) on my
laptop, just the one I most recently made.)
<p> That leaves Group 3 -- the files that don't have to be backed up because
they can be reconstructed from version-controlled sources. All of my
working trees include a Makefile -- in most cases it's a link to <a href="https://gitlab.com/ssavitzky/MakeStuff">MakeStuff</a>/Makefile --
that builds and installs whatever that tree needs. Programs, web pages,
songbooks, what have you. Initial setup of a new machine is done by a
package called <a href="https://gitlab.com/ssavitzky/Honu">Honu</a>
(Hawaiian for the green sea turtle), which I described a little over a
year ago in <a href="https://mdlbear.dreamwidth.org/1688029.html">Sable
and the turtles: laptop configuration made easy</a>.
<p> The end result is that "backups" are basically a side-effect of the way I
normally work, with frequent small commits that are pushed almost
immediately to a shared repo on Dreamhost. The workflow for large files,
especially recording projects, is similar, working on my laptop and
backing up with Rsync to the file server as I go along. When things are
ready, they go up to the web host. Make targets <code>push</code> and
<code>rsync</code> simplify the process. Going in the opposite direction,
the <a href="https://gitlab.com/ssavitzky/MakeStuff/-/blob/master/scripts/pull-all">pull-all</a> command updates everything from the shared repos.
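<p> The heart of <code>pull-all</code> can be sketched in a few lines of shell (the real script is more careful than this; the reduction below just walks the trees one level under the current directory):

```shell
#!/bin/sh
# Update every Git working tree one level below the current directory.
for d in */; do
    if [ -d "$d/.git" ]; then
        ( cd "$d" && git pull --ff-only )
    fi
done
```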
<p> Your mileage may vary.
<h3>Resources and references</h3>
<ul class="resource-list">
<li> <a href="https://mirrors.edge.kernel.org/pub/software/scm/git/docs/git.html">git(1) manual page</a>
<li> <a href="https://rsync.samba.org/documentation.html">rsync documentation</a>
<li> <a href="https://gitlab.com/ssavitzky/Honu">Honu</a>
<li> <a href="https://stephen.savitzky.net/Doc/Linux/keeping-backups/">Keeping
Backups</a> (2006)
</li></ul>
<p class="colophon"> <em>Another fine post from
<a href="https://mdlbear.dreamwidth.org/tag/curmudgeon">The Computer Curmudgeon</a> (also at
<a href="https://computer-curmudgeon.com/">computer-curmudgeon.com</a>).<br>
Donation buttons in <a href="https://mdlbear.dreamwidth.org/">profile</a>.</em></p>

<h2>More from the rabbit-hole</h2>
<p><em>mdlbear, 2018-12-11</em></p>
<p> Following up on <a href="https://mdlbear.dreamwidth.org/1649182.html">mdlbear | Welcome, tumblr refugees</a>: this might otherwise have just
been a longish section of next Sunday's "done" post, but the Tumblr
apocalypse (tumbling-down?) is happening <em>now</em> and I wanted to get
<a href="https://paste2.org/3cwm4MxJ">tumblr_backup.py</a> out there.
(It's a tumblr backup script, via <a href="https://greywash.tumblr.com/post/180779871852/greywash-so-the-original-post-about-the-greymask">this tumblr post by greywash</a>, who notes that the original post by
Greymask has disappeared). I think some of my readers will find it
useful.
<p> It's also worth noting <a href="https://greywash.dreamwidth.org/46038.html">greywash | State of the
Migration: On fannish archival catastrophes, and what happens next</a> (by
way of <a href="https://ysabetwordsmith.dreamwidth.org/11647191.html">ysabetwordsmith</a>; I saw this someplace else last week, but apparently
didn't log it.)
<p> More meta stuff:
<ul>
<li> a remark by <a href="http://dragon-in-a-fez.tumblr.com/post/181000026186/apricops-apricops-its-quite-likely-no">apricops: "It’s quite likely no...</a> coincidence that that most
‘mismanaged’ and least profitable social media site is also the one that
turned out to be most amenable to the formation of actual communities"
<li> <a href="https://pangodillo.dreamwidth.org/641.html">pangodillo | What
Dreamwidth lacks</a> is the ability to use tags as "whisperspace".
</li></ul>

<h2>Done recently (20130910 Tu - 0915 Su)</h2>
<p><em>mdlbear, 2013-09-17</em></p>
<p> Tried to log in on my file server last week and found out that the hard
drive was dead. Finally went to Fry's yesterday, and bought a couple of
Western Digital red (NAS) 2TB drives. Designed for continuous duty, which
would be a good thing. Disassembled the lock on the docking bay I had the
backup drive in (and promptly found the key, lurking in what had been my
nightstand).
</p>
<p> Confirmed that the backup works and the old main drive doesn't, and
installed the latest Debian. Which only took about an hour. It boots
fast as a bat, and ships with a driver for the Realtek ethernet controller
on my motherboard. So I can free up the PCI slot for something more
useful, like maybe an ESATA/USB-3 card, if I can find one.
</p>
<p> Now begins the tedious process of restoring (done, as of this evening) and
reconfiguring. Which will take time because I want to make some
long-overdue changes in the config.
</p>
<p> It looks like the last time a backup was made was June 25th. I don't
<em>think</em> I did much, if anything, since then except maybe add a couple of
passwords to the keychain. And of course I've lost a lot of email. If
you sent anything to steve at thestarport.org in the last couple of
months, I haven't seen it. (It is now forwarded to my gmail account,
along with steve at savitzky.net which I've been doing pretty well at
keeping up with.)
</p>
<p> It's possible that some of the transient stuff can be rescued from the old
drive -- it seems to run ok for a few minutes before suddenly going
offline. Not entirely clear that it's worth bothering with.
</p>
<p> Apart from that... Colleen has been getting physical therapy three
times/week, and is now able to stand up and transfer into her power
chair. Progress. Her caregiver is an excellent cook -- Thai, Chinese,
and Japanese, with an emphasis on lean and low sodium. Yum!
</p>
<p> Links in the notes, as usual. One, found by a coworker after I'd
mentioned something to that effect, is one of my favorite stats: <a href="http://www.electronista.com/articles/11/05/10/ipad.2.benches.as.fast.as.cray.2.from.1985/">iPad 2 as fast as Cray 2 supercomputer</a>. I also dropped a donation on
<a href="http://ysabetwordsmith.livejournal.com/2953055.html">YsabetWordsmith's poem, "Part of Who I Am"</a>. Some great links
<em>there,</em> too.
</p>
<p>( <a href="https://mdlbear.dreamwidth.org/1502902.html#cutid1">raw notes</a> )</p>