
Following up on mdlbear | Welcome, tumblr refugees: this might otherwise have just been a longish section of next Sunday's "done" post, but the Tumblr apocalypse (tumbling-down?) is happening now and I wanted to get tumblr_backup.py out there. (It's a tumblr backup script, via this tumblr post by greywash, who notes that the original post by Greymask has disappeared). I think some of my readers will find it useful.

It's also worth noting greywash | State of the Migration: On fannish archival catastrophes, and what happens next (by way of ysabetwordsmith; I saw this someplace else last week, but apparently didn't log it.)


Tried to log in on my file server last week and found out that the hard drive was dead. Finally went to Fry's yesterday and bought a couple of Western Digital Red (NAS) 2TB drives. Designed for continuous duty, which would be a good thing. Disassembled the lock on the docking bay I had the backup drive in (and promptly found the key, lurking in what had been my nightstand).

Confirmed that the backup works and the old main drive doesn't, and installed the latest Debian. Which only took about an hour. It boots fast as a bat, and ships with a driver for the Realtek ethernet controller on my motherboard. So I can free up the PCI slot for something more useful, like maybe an eSATA/USB 3.0 card, if I can find one.

Now begins the tedious process of restoring (done, as of this evening) and reconfiguring. Which will take time because I want to make some long-overdue changes in the config.

It looks like the last backup was made on June 25th. I don't *think* I did much, if anything, since then except maybe add a couple of passwords to the keychain. And of course I've lost a lot of email. If you sent anything to steve at thestarport.org in the last couple of months, I haven't seen it. (It's now forwarded to my gmail account, along with steve at savitzky.net, which I've been doing pretty well at keeping up with.)

It's possible that some of the transient stuff can be rescued from the old drive -- it seems to run ok for a few minutes before suddenly going offline. Not entirely clear that it's worth bothering with.
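
If I do get around to it, something like GNU ddrescue is built for exactly this failure mode (a sketch, not something I've actually run on this drive; the device name and filenames are illustrative):

    # ddrescue records its progress in the map file, so when the drive
    # drops offline you can power-cycle it and rerun the same command;
    # it resumes where it left off instead of starting over.
    # -d uses direct disc access; -n skips the slow scraping phase.
    ddrescue -d -n /dev/sdb old-drive.img old-drive.map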

Apart from that... Colleen has been getting physical therapy three times/week, and is now able to stand up and transfer into her power chair. Progress. Her caregiver is an excellent cook -- Thai, Chinese, and Japanese, with an emphasis on lean and low sodium. Yum!

Links in the notes, as usual. One, found by a coworker after I'd mentioned something to that effect, is one of my favorite stats: iPad 2 as fast as Cray 2 supercomputer. I also dropped a donation on YsabetWordsmith's poem, "Part of Who I Am". Some great links there, too.

( raw notes )

Backing up

2007-11-17 12:11 pm
mdlbear: (hacker glider)

So I finally decided to get serious about off-site backups: i.e., stop planning and start doing. This was assisted by the fact that work finally got around to installing a second T1 line yesterday -- my upstream bandwidth at home is barely sufficient to keep up with incremental backups; it would be hopeless for uploading the roughly 80GB already on the fileserver and needing to be backed up. (There's a lot that doesn't need to be backed up, fortunately.)

Sometime last Friday I dragged home a bare 500GB drive that was sitting around at work (originally intended for an outside-the-firewall server that never quite got off the ground), stuck it into a USB/eSATA enclosure, and loaded it up. Yesterday I mounted it on my desktop machine, and started uploading to my server at Dreamhost last night. Got about 250KB/s, which works out to about 890MB/h.

I'm doing it in pieces, of course: the web master directories last night, then my working directories today -- which amount to about 10GB, excluding the Audacity projects. Those are another 60GB -- I'll do those a little bit at a time, at night, with bandwidth limiting.
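
For scale: at ~890MB/h, 60GB is roughly 70 hours of transfer, hence the nightly chunks. The actual command isn't shown above, but a rate-limited rsync push would look something like this (host and paths are placeholders):

    # --bwlimit is in KB/s, so 100 stays well under the T1s' capacity
    # overnight; --partial keeps an interrupted transfer resumable.
    rsync -av --partial --bwlimit=100 \
          /mm/audacity/ user@backup.example.com:audacity/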

At that point, the only thing left will be the /home partition -- I can't do that until I have my planned encryption scheme in place. (Although in the interim I can fake it with an encrypted tar file.)
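
A minimal sketch of that interim fake, assuming GnuPG is available (filenames are illustrative):

    # One big symmetric blob instead of per-file encryption; gpg
    # prompts for a passphrase, and the output uploads like any file.
    tar czf - /home | gpg --symmetric --cipher-algo AES256 \
        -o home-$(date +%Y%m%d).tar.gz.gpg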

( more details, for the techies )

Hopefully I'll have everything uploaded by the end of the year, which would be nice.

flyback - Google Code
Apple's Time Machine is a great feature in their OS, and Linux has almost all of the required technology already built in to recreate it. This is a simple GUI to make it easy to use.
(from this post on slashdot.)

I just upgraded my work laptop to Leopard yesterday, and fired up Time Machine because, well, automatic incremental backups are a Good Thing. I was intrigued to find, though, that it's not really doing anything special: behind that pretty interface is a directory tree with pathnames like nodename/yyyy-mm-dd-hhmmss. Whee! It keeps hourly backups for 24 hours, daily backups for a month, and weekly backups until you run out of space on your backup disk, at which point it presumably throws up its hands and begs for more storage.

Apart from the naming conventions and intervals, that's pretty close to what I've been doing with rsync for the last couple of years on Linux. What took them so long?
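
One common way to get the same effect with rsync -- a sketch of the general technique, with illustrative paths, not necessarily my exact script -- is to hard-link each snapshot against the previous one:

    #!/bin/sh
    # Snapshot directories mimic Time Machine's nodename/yyyy-mm-dd-hhmmss.
    # Unchanged files are hard-linked from the previous snapshot, so every
    # snapshot looks complete but only changed files take new space.
    DEST=/bak/$(hostname)
    NEW=$DEST/$(date +%Y-%m-%d-%H%M%S)
    mkdir -p "$DEST"
    rsync -a --link-dest="$DEST/latest" /home/ "$NEW/"
    rm -f "$DEST/latest"
    ln -s "$NEW" "$DEST/latest"   # next run links against this one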

(eta: Other, similar packages for Win$ and Linux include BackupPC and Dirvish. What are you using?)

Did backups this morning using the new SATA backup drive and new scripts. Fast as a bat: 10 minutes for 273GB of data.

I still haven't done the rest of the associated reorganization; I just wanted to get a snapshot of the current state.

( geeky details: the next steps )

Backups

2007-06-17 12:29 pm

Set up a massive file transfer to my shiny new backup drive last night and went to bed; I was rather disturbed to come into the office this morning and find an I/O error on the screen, and the OS unable to find the drive. Gleep!

I took the drive out of the USB enclosure, powered down, put it in Trantor's case, powered up, and was greatly relieved to find the drive up and running. A thorough fsck and a fresh rsync confirmed that all data was present and accounted for. I'm guessing it may have been a glitch somewhere in the external box's USB interface or the cable. Not going to worry about it much. (eta: power-cycling the drive enclosure didn't work; I didn't try rebooting or power-cycling the computer with the drive still external -- that would probably have worked; I was just impatient.)

I'm still trying to resist the temptation to do more work on reorganizing my directory tree and setting up the offsite backups.


Meanwhile, my disk test is finally on its final write pass. At roughly 3.5 hours per pass, I'm guessing sometime between 2 and 3am. (ETA: 03:56:13, as it turns out) I'm really enjoying having an OS that's stable enough that you can run a two-day I/O-bound process without having to worry about anything more likely than a possible power failure. (Not entirely unlikely, though -- we've had two at work so far this season. There's a reason why my machines are on APC UPSs.)
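
A multi-pass destructive write test along the lines of badblocks -w fits that description -- an assumption, since the tool isn't named above:

    # Assumption: the unnamed disk test was something like this.
    # badblocks -w makes four write+verify passes (patterns 0xaa, 0x55,
    # 0xff, 0x00) and DESTROYS everything on the device; -s shows
    # progress, -v reports errors as they're found. Device name is
    # illustrative.
    badblocks -wsv /dev/sdb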


... part mumble. Just before leaving for six days of Westercon, I very sensibly put my backup drive in another room and made a second set of backups (just of the important stuff: /home, /local/starport, and /mm/record) on yet another drive that I had lying around. Came back, retrieved the backup drive, and did a full backup. The system hung when I tried to unmount it.

Oops.

Taking this as a Bad Sign, I did an fsck after the reboot, and sure enough the disk was fscked up, though not too badly. From the dates on the inodes in lost+found, I'd say I had a couple of corrupted directories due to a crash back in 2005. Redid the current backups, and all is well for the moment. But I was very glad of the spare backup disk -- things could have been much worse.

But a corrupted directory can potentially cause an arbitrary amount of data to go kablooie, or at least become very hard to recover. My current nefarious plan to back up remotely using encrypted blobs has a similar problem unless there's enough redundancy in the system to ensure that I never lose all the copies of any one blob. (It's still somewhat safer, because blobs -- even directory blobs -- are immutable and so never have to be rewritten. Hmm: log-structured blob store?)
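
To make the blob idea concrete, a minimal sketch -- an illustration of the technique, not the actual plan; paths and key handling are placeholders:

    # Content-addressed storage: the blob's name is the hash of its
    # encrypted contents, so it can never be rewritten in place --
    # redundancy is just keeping enough copies of each hash elsewhere.
    store_blob() {
        tmp=$(mktemp)
        gpg --batch --yes --symmetric --passphrase-file "$HOME/.blobkey" \
            -o "$tmp" "$1"
        hash=$(sha256sum "$tmp" | cut -d' ' -f1)
        mv "$tmp" "/blobstore/$hash"
        echo "$hash"    # a directory blob is just a list of these
    }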


This post by Mark Pilgrim, along with an (unfortunately friends-locked) post by my nephew [livejournal.com profile] asavitzk, got me thinking about backups again. I'm doing OK, but I can do better.

( the current setup: hot and cold running backups )

The current setup, with "hot" daily mirroring and "cold" weekly backups and monthly archives, works pretty well. It isn't disaster-proof, though.
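
As a sketch, that cadence maps onto cron entries like these (the script names are invented stand-ins, not the actual scripts):

    30 3 * * *    /usr/local/sbin/mirror-daily      # hot: rsync mirror to the backup drive
    45 4 * * 0    /usr/local/sbin/backup-weekly     # cold: dated weekly backup
    0  6 1 * *    /usr/local/sbin/archive-monthly   # monthly archive set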

Now, here's the plan: ( going off-site )

A lot of what I work on is public, or at least semi-public: websites, recorded songs, and the like. That gets offsite "backups" automatically, but it needs a little more work.

( publish and be damned )

Update: a slightly modified version of this post can be found on my website under the title Keeping Backups.


My new backup script seems to be working -- about 8 minutes to mirror a day's changes in over 100GB of files. It still needs to be parametrized; then I'll write it up and put it up on my website. And I really need to move the Debian mirror to a larger disk on the gateway; it's occupying 70GB that I'm going to be needing soon.


Finally got my daily backup script up and installed on the fileserver. Basically all it does is mount the backup drive and all of its partitions (which are identical to the ones on the main drive), mirror each partition with rsync, and unmount the backup drive.

It still needs to be parametrized better -- right now it's specialized for that particular set of partitions.
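
The behavior described above amounts to something like this sketch -- the partition list and /bak mount points are illustrative stand-ins, which is exactly the parametrization it still needs:

    #!/bin/sh
    PARTS="/home /local /mm"
    for p in $PARTS; do mount "/bak$p"; done    # assumes fstab entries
    for p in $PARTS; do
        rsync -a --delete "$p/" "/bak$p/"       # mirror, pruning deleted files
    done
    for p in $PARTS; do umount "/bak$p"; done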


I've been rearranging directories in my public website, and corresponding directories on the fileserver. The most recent operation was to move theStarport.com/people/steve/Doc/ to theStarport.com/Steve_Savitzky/. Everything went well; I did the move, made the corresponding move in the CVS repository (it's done using a one-line find command), fixed up the Makefiles (two similar one-liners), and then went to move the latest backup directory so that the next rsync wouldn't have to copy all the sound files and other bulky stuff.

That's when I noticed that /bak/usr/local/starport was a symlink. I'd installed a new, large disk on the fileserver a little over a month ago, and moved /usr/local into a separate partition called /local. I then made a new /usr/local just for the fileserver. I was backing up the new directory, which was in the same old place, but not the new partition. Oops.

No real harm done -- there haven't been many changes since late March when I installed the new disk. Except for the major changes I made this weekend, and I've backed all that up now.
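
The cheap guard against a repeat (a suggestion, not something from the original incident): check the backup tree for stray symlinks before trusting it, since mirroring into a path that's secretly a symlink sends the data somewhere other than the partition you think you're backing up.

    # Any symlink near the top of the backup tree probably means a
    # partition moved out from under the backup script.
    find /bak -maxdepth 3 -type l -ls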
