mdlbear: (hacker glider)

1: The Turing Machine

So, Wednesday I looked at Wikipedia's front page and saw, under the "On this day" heading:

1936 – English mathematician Alan Turing published the first details of the Turing machine (model pictured), an abstract device that can simulate the logic of any computer algorithm by manipulating symbols.

It was the ""model pictured" that grabbed me. The caption was/is "A physical Turing machine model. A true Turing machine would have unlimited tape on both sides, however, physical models can only have a finite amount of tape."

I knew that -- everyone who studies computer science knows that, and a few have dreamed, as I had, of building a physical model. I even figured out how to build one out of wood, minus a few details. But there it was.

(If you're not interested in the details, you can skip this and the other indented blocks. But I digress...)

A Turing Machine is a remarkably simple device. It has a read/write head, a strip of tape that it operates on, and a controller with a finite number of states. It can read what's on the tape -- the classic machine uses blank, "0", and "1". (Some versions use "X" instead of "1", and some dispense with "0" and just have 1 and blank. That makes programming them a little more complicated, but not by much. Some have more symbols. It doesn't matter -- you can program around it.) The machine can move the tape backward and forward. Numbers are usually represented in unary, so you count "1", "11", "111", ..., although with both 1 and 0 you could use binary, and some versions do.

The machine has a "state", which selects a line in the machine's program that tells it what to write, which direction to move the tape, and which state to go to next, depending on what symbol the read head is looking at. (Think of the table as a drum with pegs on it, like a music box.)
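To make that state table concrete, here's a rough sketch in Python (the names and the little unary-increment program are mine, purely for illustration -- not anything from the models linked below). The whole "machine" is just a dictionary mapping (state, symbol) to (symbol to write, direction to move, next state):

```python
# A minimal Turing machine sketch (not any particular historical design).
# The "program" is a table: (state, symbol) -> (symbol_to_write, move, next_state).
# Symbols here are "1" and " " (blank); moves are -1 (left) and +1 (right).

def run(program, tape, state="start", pos=0, max_steps=10_000):
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, " ")
        write, move, state = program[(state, symbol)]
        tape[pos] = write
        pos += move
    # Render the used portion of the tape as a string.
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, " ") for i in range(lo, hi + 1)).strip()

# Example program: append one more "1" to a unary number ("111" becomes "1111").
increment = {
    ("start", "1"): ("1", +1, "start"),   # skip over the existing 1s
    ("start", " "): ("1", +1, "halt"),    # write a 1 on the first blank and stop
}

print(run(increment, "111"))   # -> "1111"
```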

That's it. That's all you need to compute any function that can be computed by any kind of mechanical or digital computer. Of course you may need a lot of tape -- so you need to attach it to a tape factory -- and a lot of time.

The critical thing is that it's possible to design a universal Turing machine: it takes a tape and the state table of a Turing machine (encoded in 1's, 0's, and blanks), and uses that description to do exactly what that machine is programmed to do. Turing's big accomplishment was using the universal Turing machine to prove that there are some things that a computer can't do, no matter how much time and tape you give it.

But of course I was much more fascinated by the machines, starting at the website of the model that first grabbed my attention, and proceeding to a Turing machine made of Legos. I spent some time in the Turing machine gallery. But the rabbit hole went deeper than that.

2: The Universal Constructor

At about that point it occurred to me to look at the Wikipedia page for the Von Neumann universal constructor. Because once you have a kind of machine that can simulate itself, the natural question is whether you can have a machine that can build a copy of itself.

The trivial answer to this question is "Yes, of course. Cells have been reproducing themselves for billions of years." But in the 1940s when von Neumann was thinking about this, the structure of DNA had not yet been determined -- that was 1953 -- and although it had been known since the late 1920s that DNA had something to do with heredity, nobody knew how it worked. So his insight into the machinery of reproduction was pretty remarkable.

Like Turing's insight into the machinery of computation, von Neumann's insight into the machinery of reproduction was to separate the machine -- the Universal Constructor -- from the description of what it was to construct, stored on something simple -- a tape.

Von Neumann's machine was/is a cellular automaton; it "lived" (if you can call it that) on a grid of squares, where each square can be in one of 29 different states, with rules that tell it what to do depending on the states of its neighbors. A completely working machine wasn't simulated until 1995 (using a 32-state variant of the rules). Its constructor had 6,329 cells and a tape 145,315 cells long, and it took over 63 billion timesteps to copy itself. (Smaller and faster versions have been constructed since then.)

At, say, 1000 steps/second, that would have taken over two years. It wasn't until 2008 that a program, Golly, became able to simulate it using the hashlife algorithm; it now takes only a few minutes.

Which led me even further down the rabbit hole. Because no discussion of cellular automata would be complete without Conway's Game of Life.

3: The Game of Life

It's not really a game, of course; it's a cellular automaton. Each cell in the square grid is either dead or alive. You start with an arrangement of live cells and turn them loose according to four simple rules (sketched in code just after the list):

  1. If a live cell has fewer than two live neighbors (out of the 8 cells surrounding it), it dies of loneliness.
  2. A live cell with two or three live neighbors, stays alive.
  3. A live cell with more than three live neighbors dies of overpopulation.
  4. A dead cell with exactly three live neighbors becomes live.
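
Here's a minimal sketch of those four rules in Python (my own toy, not any of the software mentioned in these posts). Live cells are kept in a set of (x, y) coordinates, and everything else is assumed dead:

```python
from collections import Counter

def step(live):
    """Apply Conway's rules to a set of live (x, y) cells; return the next generation."""
    # Count live neighbors; only cells adjacent to a live cell can matter.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Rule 4: exactly three neighbors -> birth.  Rules 1-3: survive only on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": a row of three cells flips between horizontal and vertical.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))                    # -> {(1, 0), (1, 1), (1, 2)}
print(step(step(blinker)) == blinker)   # -> True
```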

I first encountered the game in the October 1970 issue of Scientific American, in Martin Gardner's "Mathematical Games" column. The Wikipedia article gives a good introduction.

Patterns in Life evolve in a bewildering variety of ways. Many of them die out quickly -- an isolated cell, for example. Some patterns sit there and do nothing -- they're called "still lifes"; a 2x2 block of cells, for example. Some blow up unpredictably, and may or may not leave a pile of random still lifes behind. Some patterns oscillate: a horizontal row of three cells will become a vertical row in the next turn, and vice versa -- it's called a "blinker".

And some patterns move. The simplest, called a "glider", appears in this post's icon. You can crash gliders into blocks or gliders into gliders, and depending on the timing they will do different interesting things. It didn't take people long to figure out that you can build computers, including a universal Turing machine. Or a machine that prints out the digits of Pi.

Or a universal constructor.

4: The Universal Constructor

While I was falling into this rabbit hole, I happened to remember a passing mention of a universal constructor that can build anything at all out of exactly 15 gliders. (Strictly speaking, anything that can be constructed by crashing gliders together. Some patterns can't be made that way. But almost all the complicated and interesting ones that people are building these days can.) If this intrigues you, go read the article. Or wait until the next section, where I finally get to the bottom of the rabbit hole.

On the way down I encountered lots of weird things -- the aforementioned universal Turing machine and Pi printer, and a variety of "spaceships" that travel by, in effect, repeatedly constructing a new copy of themselves, then cleaning up the old copy. It took a while for me to get my head around that.

Then, sometime Wednesday evening, I found the book.

5: The Book of Life

It's not called "The Book of Life", of course, it's called Conway's Game of Life: Mathematics and Construction. But you get the idea. You can download the PDF.

The book ends with a pattern that simulates a Life cell. There are several different versions of this; this is the latest. It works by making copies of itself in any neighboring cells that are coming alive, then destroying itself if it's about to die. Wild.

Another fine post from The Computer Curmudgeon (also at computer-curmudgeon.com).
Donation buttons in profile.

mdlbear: (technonerdmonster)

Note: Despite being posted on a Saturday, and despite having a title that includes the name of a character from a well-known musical, this is not a Songs for Saturday post. It doesn't have anything to do with fish, either.

Remarkably, Joseph Weizenbaum's original source code for ELIZA has been rediscovered, after having been missing and believed lost for over half a century, and was made public on May 23rd of this year. ELIZA is probably the oldest and almost certainly the best-known implementation of what is now known as a chatbot.

If you decide to look at the code, start by reading the web page it's embedded in before you dive into the listing. The "Notes on reading the code" section, which comes after the listing, will prevent a lot of confusion. The listing itself is a scan of a 132-column listing, and definitely benefits from being viewed full-screen on a large monitor.

The first thing you see in the listing is the script -- the set of rules that tells the ELIZA program how to respond to input. The program itself starts on page 6. You might be misled by the rules, which are in the form of parenthesized lists, into thinking that the program would be written in LISP. It's not; it's written in MAD, an Algol-like language, with Weizenbaum's SLIP (Symmetric List Processing) primitives embedded in it.
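
To give a flavor of what those parenthesized rules do -- each keyword has a decomposition pattern and one or more reassembly templates -- here's a toy sketch in Python. It is emphatically not Weizenbaum's code or the real script format (and it skips the pronoun-swapping step); it's just the general idea:

```python
import random
import re

# A toy in the spirit of an ELIZA script: each keyword has a decomposition
# pattern (here, a regular expression) and a few reassembly templates.
RULES = [
    (re.compile(r"\bI am (.*)", re.I),   ["Why do you say you are {0}?",
                                          "How long have you been {0}?"]),
    (re.compile(r"\bI feel (.*)", re.I), ["Tell me more about feeling {0}."]),
    (re.compile(r"\bmy (.*)", re.I),     ["Your {0}?"]),
]
DEFAULTS = ["Please go on.", "What does that suggest to you?"]

def respond(sentence):
    for pattern, templates in RULES:
        match = pattern.search(sentence)
        if match:
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return random.choice(DEFAULTS)      # no keyword matched

print(respond("I am tired of winter"))  # e.g. "Why do you say you are tired of winter?"
```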

SLIP uses circular, bidirectionally-linked lists. Each list has a header with pointers to the first and last list element; the header of an empty list points to itself. I've lost track of how many times I've implemented doubly-linked lists, in everything from assembly language to Java.
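
If it's been a while since you've built one: here's the shape of that data structure, in Python rather than MAD/SLIP (the names are mine). The trick is the header node, whose next and prev pointers point back at itself when the list is empty:

```python
class Node:
    """One element of a circular, doubly-linked list."""
    def __init__(self, value=None):
        self.value = value
        self.next = self      # an isolated node (or an empty header) points to itself
        self.prev = self

class CircularList:
    def __init__(self):
        self.header = Node()  # header.next is the first element, header.prev the last

    def is_empty(self):
        return self.header.next is self.header

    def append(self, value):
        node = Node(value)
        last = self.header.prev
        last.next = node
        node.prev = last
        node.next = self.header
        self.header.prev = node

    def __iter__(self):
        node = self.header.next
        while node is not self.header:
            yield node.value
            node = node.next

lst = CircularList()
for v in "abc":
    lst.append(v)
print(list(lst))   # -> ['a', 'b', 'c']
```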

ELIZA is the name of the program, but "Eliza" usually refers to the combination of an ELIZA-like program with its most common script, "Doctor", a (rather poor) simulation of a Rogerian psychotherapist. According to the note at the bottom of the Original Eliza page, actual Rogerian therapists have pronounced it a perfect example of how not to do Rogerian therapy. Nevertheless, many people are said to have been helped by ELIZA, and it's possible to have a surprisingly intimate conversation with her as long as you suspend your disbelief and respect her limits.

If you have Emacs installed on your computer, you can access a version of Doctor with M-x doctor. Otherwise, browse to Eliza, Computer Therapist if you don't mind having a potentially intimate conversation with something hosted on a public website. (Or simply download the page -- it's written in JavaScript.)


Another fine post from The Computer Curmudgeon (also at computer-curmudgeon.com).
Donation buttons in profile.

mdlbear: blue fractal bear with text "since 2002" (Default)

Considering that last week included Ame's 30th birthday and the 75th anniversary of Hiroshima and Nagasaki (marked by re-reading John Hersey's book), I'd have to say that it wasn't nearly as bad a week as it could have been. Leaving me free to worry about Colleen's ongoing health problems and the US's ongoing descent into tyranny. We may have a handle on solving the first of those. I'm rapidly losing hope about the second. I don't think I've worried much about the Earth's ongoing climate changes all week. So there's that.

This is not the future I ordered. How do I send it back and get a replacement? Alternatively, I'd really like to wake up now.

I spent most of yesterday working on improved documentation for FlkTex. That will hopefully result in a couple of posts some time in the near future.

The links about Ame and Hiroshima should probably be approached with caution; there's a fair amount of quoting for context.

Notes & links, as usual )

mdlbear: portrait of me holding a guitar, by Kelly Freas (freas)

I was kind of at a loss for which song to write about today, but then somebody posted a link to Apollo 13 in Real Time. Fifty years ago today. So there's really only one song that fits: The Ballad of Apollo XIII.

The link goes to a music video somebody pieced together over Julia Ecklar's track from Minus Ten and Counting: Songs of the Space Age (the linked page includes YouTube videos of all the tracks). You'll find the lyrics, posted by the songwriter, in this comment.

I don't have anything more to add.

mdlbear: (technonerdmonster)

Most humans multitask rather badly -- studies have shown that when one tries to do two tasks at the same time, both tasks suffer. That's why many states outlaw using a cell phone while driving. Some people are much better than others at switching between tasks, especially similar tasks, and so give the appearance of multitasking. There is still a cost to switching context, though. The effect is much less if one of the tasks requires very little attention -- knitting during a conversation, say, or sipping coffee while programming. (Although I have noticed that if I get deeply involved in a programming project my coffee tends to get cold.) It may surprise you to learn that computers have the same problem.

Your computer isn't really responding to your keystrokes and mouse clicks, playing a video from YouTube in one window while running a word processor in another, copying a song to a thumb drive, fetching pages from ten different web sites, and downloading the next Windows update, all at the same time. It's just faking it by switching between tasks really fast. (That's only partially true. We'll get to that part later, so if you already know about multi-core processors and GPUs, please be patient. Or skip ahead. Like a computer, my output devices can only type one character at a time.)

Back when computers weighed thousands of pounds, cost millions of dollars, and were about a million times slower than they are now, people started to notice that their expensive machines were idle a lot of the time -- they were waiting for things to happen in the "real world", and when the computer was reading in the next punched card it wasn't getting much else done. As computers got faster -- and cheaper -- the effect grew more and more noticeable, until some people realized that they could make use of that idle time to get something else done. The first operating systems that did this were called "foreground/background" systems -- they used the time when the computer was waiting for I/O to switch to a background task that did a lot of computation and not much I/O.

Once when I was in college I took advantage of the fact that the school's IBM 1620 was just sitting there most of the night to write a primitive foreground/background OS that consisted of just two instructions and a sign. The instructions dumped the computer's memory onto punched cards and then halted. The sign told whoever wanted to use the computer to flip a switch, wait for the dump to be punched out, and load it back in when they were done with whatever they were doing. I got a solid week of computation done. (It would take much less than a second on your laptop or even your phone, but we had neither laptop computers nor cell phones in 1968.)

By the end of the 1950s computers were getting fast enough, and had enough memory, that people could see where things were headed, and several people wrote papers describing how one could time-share a large, fast computer among several people to give them each the illusion that they had a (perhaps somewhat less powerful) computer all to themselves. The users would type programs on a teletype machine or some other glorified typewriter, and since it takes a long time for someone to type in a program or make a change to it, the computer had plenty of time to do actual work. The first such systems were demonstrated in 1961.

I'm going to skip over a lot of the history, including minicomputers, which were cheap enough that small colleges could afford them (Carleton got a PDP-8 the year after I graduated). Instead, I'll say a little about how timesharing actually works.

A computer's operating system is there to manage resources, and in a timesharing OS the goal is to manage them fairly, and switch contexts quickly enough for users to think that they're using the whole machine by themselves. There are three main resources to manage: time (on the CPU), space (memory), and attention (all those users typing at their keyboards).

There are two ways to manage attention: polling all of the attached devices to see which ones have work to do, and letting the devices interrupt whatever was going on. If only a small number of devices need attention, it's a lot more efficient to let them interrupt the processor, so that's how almost everything works these days.
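
Here's a sketch of the difference in Python (the "devices" and names are invented): polling asks every device in turn whether it has anything, while the interrupt style registers a handler that only runs when a device actually has something to say:

```python
import queue

def handle(device, data):
    print(f"{device}: {data}")

# Polling: the processor repeatedly asks every device whether it has work.
device_queues = {"keyboard": queue.Queue(), "disk": queue.Queue()}

def poll_once():
    for name, q in device_queues.items():
        while not q.empty():
            handle(name, q.get())

device_queues["disk"].put("block 42")
poll_once()                            # -> disk: block 42

# Interrupt style: a device invokes its registered handler only when it has data,
# so no time is spent checking devices that are idle.
handlers = {}

def register(name, handler):
    handlers[name] = handler

def raise_interrupt(name, data):       # called "by the device"
    handlers[name](data)

register("keyboard", lambda data: handle("keyboard", data))
raise_interrupt("keyboard", "x")       # -> keyboard: x
```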

When an interrupt comes in, the computer has to save whatever it was working on, do whatever work is required, and then put things back the way they were and get back to what it was doing before. This takes time. So does writing about it, so I'll just mention it briefly before getting back to the interesting stuff.

See what I did there? This is a lot like what I'm doing writing this post, occasionally switching tasks to eat lunch, go shopping, sleep, read other blogs, or pet the cat that suddenly sat on my keyboard demanding attention.

Let's look at time next. The computer can take advantage of the fact that many programs perform I/O to use the time when it's waiting for an I/O operation to finish to look around and see whether there's another program waiting to run. Another good time to switch is when an interrupt comes in -- the program's state already has to be saved to handle the interrupt. There's a bit of a problem with programs that don't do I/O -- these days they're usually mining bitcoin. So there's a clock that generates an interrupt every so often. In the early days that used to be 60 times per second (50 in Britain); a sixtieth of a second was sometimes called a "jiffy". That way of managing time is often called "time-slicing".
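
Here's a toy round-robin scheduler in Python, with generators standing in for programs and a fixed slice length standing in for the clock interrupt (all the names are mine, purely for illustration):

```python
from collections import deque
from itertools import islice

def program(name, steps):
    """A stand-in for a running program: each yield is one unit of work."""
    for i in range(steps):
        yield f"{name} step {i}"

def round_robin(programs, slice_len=3):
    """Run each program for slice_len steps, then switch -- crude time-slicing."""
    ready = deque(programs)
    while ready:
        current = ready.popleft()
        work = list(islice(current, slice_len))   # the "clock interrupt" ends the slice
        for item in work:
            print(item)
        if len(work) == slice_len:                # not finished: back of the queue
            ready.append(current)

round_robin([program("editor", 5), program("compiler", 4)])
```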

The other way of managing time is multiprocessing: using more than one computer at the same time. (Told you I'd get to that eventually.) The amount of circuitry you can put on a chip keeps increasing, but the amount of circuitry required to make a CPU (a computer's Central Processing Unit) stays pretty much the same. The natural thing to do is to add another CPU. That's the point at which CPUs on a chip started being called "cores"; multi-core chips started hitting the consumer market in the mid-2000s.

There is a complication that comes in when you have more than one CPU, and that's keeping them from getting in one another's way. Think about what happens when you and your family are making a big Thanksgiving feast in your kitchen. Even if it's a pretty big kitchen and everyone's working on a different part of the counter, you're still occasionally going to have times when more than one person needs to use the sink or the stove or the fridge. When this happens, you have to take turns or risk stepping on one another's toes.

You might think that the simplest way to do that is to run a completely separate program on each core. That works until you have more programs than processors, and it happens sooner than you might think because many programs need to do more than one thing at a time. Your web browser, for example, starts a new process every time you open a tab. (I am not going to discuss the difference between programs, processes, and threads in this post. I'm also not going to discuss locking, synchronization, and scheduling. Maybe later.)

The other thing you can do is to start adding specialized processors for offloading the more compute-intensive tasks. For a long time that meant graphics -- a modern graphics card has more compute power than the computer it's attached to, because the more power you throw at making pretty pictures, the better they look. Realistic-looking images used to take hours to compute. In 1995 the first computer-animated feature film, Toy Story, was produced on a fleet of 117 Sun Microsystems computers running around the clock. They got about three minutes of movie per week.

Even a mediocre graphics card can generate better-quality images at 75 frames per second. It's downright scary. In fairness, most of that performance comes from specialization. Rather than being general-purpose computers, graphics cards mostly just do the computations required for simulating objects moving around in three dimensions.

The other big problem, in more ways than one, is space. Programs use memory, both for code and for data. In the early days of timesharing, if a program was ready to run that didn't fit in the memory available, some other program got "swapped out" onto disk. All of it. Of course, memory wasn't all that big at the time -- a megabyte was considered a lot of memory in those days -- but it still took a lot of time.

Eventually, however, someone hit on the idea of splitting memory up into equal-sized chunks called "pages". A program doesn't use all of its memory at once, and most operations tend to be pretty localized. So a program runs until it needs a page that isn't in memory. The operating system then finds some other page to evict -- usually one that hasn't been used for a while. The OS writes out the old page (if it has to; if it hasn't been modified and it's still around in swap space, you win), and schedules the I/O operation needed to read the new page in. And because that takes a while, it goes off and runs some other program while it's waiting.

There's a complication, of course: you need to keep track of where each page is in what its program thinks of as a very simple sequence of consecutive memory locations. That means you need a "page table" or "memory map" to keep track of the correspondence between the pages scattered around the computer's real memory, and the simple virtual memory that the program thinks it has.
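
Here's a sketch of that bookkeeping in Python -- a toy page table mapping virtual page numbers to physical frames, evicting the least-recently-used page on a fault. The frame count and names are invented for the example, and real hardware does this lookup in silicon:

```python
from collections import OrderedDict

class TinyMMU:
    """Toy page table: virtual page number -> physical frame, with LRU eviction."""
    def __init__(self, num_frames):
        self.table = OrderedDict()           # virtual page -> frame, oldest first
        self.free_frames = list(range(num_frames))
        self.faults = 0

    def translate(self, vpage):
        if vpage in self.table:
            self.table.move_to_end(vpage)    # mark as recently used
            return self.table[vpage]
        # Page fault: bring the page in, evicting the least-recently-used if needed.
        self.faults += 1
        if self.free_frames:
            frame = self.free_frames.pop()
        else:
            _, frame = self.table.popitem(last=False)   # evict LRU (write-back omitted)
        self.table[vpage] = frame
        return frame

mmu = TinyMMU(num_frames=2)
for vpage in [0, 1, 0, 2, 1]:   # page 1 is the LRU when 2 arrives, so it gets evicted
    mmu.translate(vpage)
print(mmu.faults)               # -> 4 (pages 0, 1, 2, and 1 again)
```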

There's another complication: it's perfectly possible (and sometimes useful) for a program to allocate more virtual memory than the computer has space for in real memory. And it's even easier to have a collection of programs that, between them, take up more space than you have.

As long as each program only uses a few separate regions of its memory at a time, you can get away with it. The memory that a program needs at any given time is called its "working set", and with most programs it's pretty small and doesn't jump around too much. But not every program is this well-behaved, and sometimes even when they are there can be too many of them. At that point you're in trouble. Even if there is plenty of swap space, there isn't enough real memory for every program to get its whole working set swapped in. At that point the OS is frantically swapping pages in and out, and things slow down to a crawl. It's called "thrashing". You may have noticed this when you have too many browser tabs open.

The only things you can do when that happens are to kill some large programs (Firefox is my first target these days), or re-boot. (When you restart, even if your browser restores its session to the tabs you had open when you stopped it, you're not in trouble again because it only starts a new process when you look at a tab.)

And at this point, I'm going to stop because I think I've rambled far enough. Please let me know what you think of it. And let me know which parts I ought to expand on in later posts. Also, tell me if I need to cut-tag it.

Another fine post from The Computer Curmudgeon (also at computer-curmudgeon.com). If you found it interesting or useful, you might consider using one of the donation buttons on my profile page.

NaBloPoMo stats:
   8632 words in 13 posts this month (average 664/post)
   2035 words in 1 post today

mdlbear: Wild turkey hen close-up (turkey)

Well, gratitude is good no matter what day it happens on. Though 9/11 is one of the worst days for it.

  • Still, the terrorist attacks 14 years ago gave me an opportunity to take a cheap flight to Ohio for my first OVFF. So there's that.
  • I'm also grateful for my family. I'm not saying they keep me sane, but they do keep the craziness from getting completely self-destructive.
  • My cane deserves a mention. Even when my back and knees are almost recovered, it helps. If only to give me something to lean on if I have to stand up, and a seat on the bus so I don't have to stand on something that's moving.
  • And of course continuing employment, along with an increase in productivity.
  • And finally, fervent thanks that things are not as bad as they could be.
mdlbear: blue fractal bear with text "since 2002" (Default)

The main news this week is that my Mom had open-heart surgery Tuesday morning. They replaced her mitral valve, and repaired another (which wasn't in the original plan, so it went longer than expected). She was in really bad shape when my brother drove her to the hospital in the morning, and there was some debate as to whether they should do the surgery. She's 93.

We needn't have worried. They had her up and walking the next day; she called me on Wednesday sounding like her old self, and she's bouncing back much faster than her doctors expected. I'm not surprised; Mom's amazing, and she keeps on proving it.

The moon landing was 45 years ago last Sunday. Sad -- we were all sure there would be lunar colonies by now. Not to mention flying cars, robots, artificial intelligence, and free single-payer health care for everyone in the US.

Lots of good links in the notes.

raw notes, with links )

45

2014-07-20 06:34 pm
mdlbear: blue fractal bear with text "since 2002" (Default)

If I remember correctly, I watched the moon landing on the TV in the lounge at the Stanford AI Lab, 45 years ago today. It was the start of my first year of grad school.

I missed my 45th reunion at Carleton a few weeks ago. IIRC I went to my 25th, but it might have been my 30th.

My 50th high school reunion is next year.

I don't think I count as middle-aged anymore.

mdlbear: (distress)

Today in history: this country suffered a major defeat in the Battle of Brandywine, September 11, 1777.

The battle, which was a decisive victory for the British, left Philadelphia, the revolutionary capital, undefended. The British captured the city on September 26, beginning an occupation that would last until June 1778.

mdlbear: blue fractal bear with text "since 2002" (Default)
raw notes )

So, once again, I have managed to overlook a day's worth of notes and end up having to post them out of sequence. This doesn't bother me too much, but it bothers me.

Fortunately, my Hiroshima Day post was done separately. That may have been what threw off my reckoning, actually.

My headache, etc., came back, leading me to speculate that it takes a couple of days for the methocarbamol and naproxen to build up a sufficient concentration. My doctor confirmed it this afternoon, diagnosing it as a trapezius muscle strain. Did I mention that it hurts?

I also bought what turned out to be a Manhasset tabletop music stand. Very lightweight. With a minor application of vise grips, it attaches nicely to a mic stand quick-connect... and fits nicely in my checked suitcase.

I did quite a lot of puttering, of various sorts.

A pretty good day, actually. Links up in the notes, as usual.

mdlbear: blue fractal bear with text "since 2002" (Default)

[livejournal.com profile] ysabetwordsmith links to the various NASA anniversaries.

The first moon landing took place in 1969, 41 years ago. I had just graduated from college, and moved to the Bay Area for graduate school.

One of my best friends wasn't even born yet.

mdlbear: (g15-meters)
Multics
Overview

Multics (Multiplexed Information and Computing Service) was a mainframe timesharing operating system that began at MIT as a research project in 1965. It was an important influence on operating system development.
History of Multics

The plan for Multics was presented to the 1965 Fall Joint Computer Conference in a series of six papers. It was a joint project with M.I.T., General Electric, and Bell Labs. Bell Labs dropped out in 1969, and in 1970 GE's computer business, including Multics, was taken over by Honeywell (now Bull).

MIT's Multics research began in 1964, led by Professor Fernando J. Corbató at MIT Project MAC, which later became the MIT Laboratory for Computer Science (LCS) and then Computer Science And Artificial Intelligence Laboratory (CSAIL). Starting in 1969, Multics was provided as a campus-wide information service by the MIT Information Processing Services organization, serving thousands of academic and administrative users.

Multics was conceived as a general purpose time-sharing utility. It would be a commercial product for GE, which sold time-sharing services. It became a GE and then Honeywell product. About 85 sites ran Multics. However, it had a powerful impact in the computer field, due to its many novel and valuable ideas.

Since it was designed to be a utility, such as electricity and telephone services, it had numerous features to provide high availability and security. Both the hardware and software were highly modular so that the system could grow in size by adding more of the appropriate resource even while the service was running. Since services were shared by users who might not trust each other, security was a major feature with file sharing provided at the file level via access controls. For more information, see: Wikipedia's Multics: Novel Ideas

LCS research on Multics ended in the late 1970s, and Bull ended Multics development in 1985. MIT shut down its Multics service in 1988. The last Multics system was deactivated in 2000.

Multics Source and Documentation

In order to preserve the ideas and innovations that made Multics so important in the development of computer systems, Bull HN has provided the source code for the final Multics release, MR 12.5 of November 1992 to MIT. It is a generous contribution to computer science knowledge and is provided for academic purposes. Additionally, we intend this site to become a repository for many papers and documents that were created during the Multics development as a complement to the other Multics sites.

Multics Source and Listings
That last link says it all. There are many ideas in Multics that are still being re-invented incorrectly today. If you have any interest at all in the architecture and history of computer systems, go read it.
mdlbear: blue fractal bear with text "since 2002" (Default)
CONELRAD | DAISY: THE COMPLETE HISTORY OF AN INFAMOUS AND ICONIC AD - PART ONE
Every election season when politicians unleash their expensive and (usually) unimaginative attack ads, op-ed writers invoke the unofficial title of the most notorious 60 seconds in advertising history: "The Daisy Ad" (official title: "Peace, Little Girl," aka "Daisy Girl," "The Daisy Spot," aka "Little Girl – Countdown"). The spot features a little girl picking petals off of a daisy in a field and counting out of sequence just before an adult voiceover interjects a "military" countdown which is then followed by stock footage of a nuclear explosion and the cautionary words of President Lyndon B. Johnson: "These are the stakes – to make a world in which all of God's children can live, or to go into the dark. We must either love each other, or we must die." The ad – which never identifies its target – was aimed at reinforcing the perception that the 1964 Republican candidate for president, Senator Barry M. Goldwater, could not be trusted with his finger on the button. As has often been recited, the Daisy ad aired only once as a paid advertisement – on NBC during the network movie (DAVID AND BATHSHEBA) on Monday, September 7, 1964. Since that long ago Labor Day, the film of the child and her daisies has been re-played millions of times.
(Via BoingBoing, of course.)

I've seen it. It was very effective.
mdlbear: (space colony)

July 20, 1969.

By coincidence, July 20 was the original due date for our first child, which is why [livejournal.com profile] chaoswolf's middle name is Diana. She decided to arrive early, though, which is why she celebrates her birthday during Westercon.

This year, I'm celebrating by burning a disk which I hope is epsilon away from a master for Coffee, Computers, and Song! (It's still available for preorder for the next few weeks. After it's real I'll have to start charging sales tax and shipping.)

Update: I had to dash to get to a meeting at work, but about 1/2 hour after posting this I put a call in to Oasis and set the wheels in motion. I'm feeling much better about the schedule since discovering that I can get the project fast-tracked for only an extra $200. Disks at ConChord are looking at least possible, if not inevitable.

mdlbear: (g15-meters)

Just finished my final panel, on computer history. The program book blurb made it sound like it was mostly about Moore's Law and the way computers have evolved from the last century to the present, but in fact it was the usual bunch of old fogies reminiscing about the way things used to be in the good old days when men were men and transistors were germanium.

Fun, and I didn't have to moderate it, so I'm happy. It's been a good con, but now it's time to go home and take a day's worth of vacation.

mdlbear: (ccs-cover)
IBM 1401 Mainframe, the Musical
When IBM chief maintenance engineer Jóhann Gunnarsson started tinkering with the IBM 1401 Data Processing System, believed to have been the first computer to arrive in his native Iceland in 1964, he noticed an electromagnetic leak from the machine's memory caused a deep, cellolike hum to come from nearby AM radios.

It was a production defect but, captivated, amateur musician Gunnarson and his colleagues soon learned how to reprogram the room-size business workhorse's innards to emit melodies that rank amongst the earliest in a long line of Scandinavian digital music.

Fast-forward four decades, and recently discovered tape recordings of Gunnarson's works form the basis of a touring song-and-dance performance, IBM 1401: A User's Manual. The show was composed by Gunnarson's son Jóhann Jóhannsson, with interpretive dance choreographed by Erna Omarsdotti, whose father is another IBM alum.
But never mind that. There's a video clip at the end of the article. It's boring.

The really cool thing about this article is the link to this web site of IBM 1401 movies and sounds. It includes sound clips of music played on the 1401's chain printer, and a link to Movies-n-Sounds of Antique Computers. In particular, this awesome movie of an IBM 650 starting up, and an audio clip of the 650's drum spinning up.

Actually, I came fairly close to synthesizing that in Audacity, as you can hear in Vampire Megabyte [ogg] [mp3], available soon on my upcoming CD, Coffee, Computers and Song.
mdlbear: (g15-meters)
Bendix G-15 - Wikipedia, the free encyclopedia
The Bendix G-15 computer was introduced in 1956 by the Bendix Corporation, Computer Division, Los Angeles, California. It was about 5 by 3 by 3 feet and weighed about 950 pounds. The base system, without peripherals, cost $49,500. A working model cost around $60,000. It could also be rented for $1,485 per month. It was meant for scientific and industrial markets. The series was gradually discontinued when Control Data Corporation took over the Bendix computer division in 1963.

The chief designer of the G-15 was Harry Huskey, who had worked with Alan Turing on the ACE in the United Kingdom and on the SWAC in the 1950s. He made most of the design while working as a professor at Berkeley, and other universities. David C. Evans was one of the Bendix engineers on the G-15 project. He would later become famous for his work in computer graphics and for starting up Evans & Sutherland with Ivan Sutherland.
The icon is a close-up of the meters on the front panel; they allowed the operator to adjust the power-supply voltages until the vacuum tubes were happy. The image it was ganked from was found here.

image behind cut )
mdlbear: (hacker glider)
When we got our hotel room last night I immediately recognized our room number, 1403, as the number of the printer associated with the IBM 1401 computer.
mdlbear: (chernobyl bunny)
Atomic bombings of Hiroshima and Nagasaki - Wikipedia, the free encyclopedia
On the morning of August 6, 1945 the United States Army Air Forces dropped the nuclear weapon "Little Boy" on the city of Hiroshima, followed three days later by the detonation of the "Fat Man" bomb over Nagasaki, Japan.
mdlbear: (sureal time)
Boing Boing: Use of term "flash mob" dates back to 1800s Tasmania?

Of course, neither "flash" nor "mob" meant the same then as they do now; "flash" referred to a style of dress.
image; click for original (larger) version )
mdlbear: (hacker glider)

Just finished reading What the Dormouse Said: How the 60s Counterculture Shaped the Personal Computer Industry by John Markoff. It was a gift from Smalltalk hacker and former roommate Ted Kaehler (he gets a brief mention in Chapter 7). What's amazing about it is how many of the people mentioned in it I've met, and in many cases worked with. (Of course, having been at SAIL, Xerox PARC, and later at Zilog helps.) My wife the [livejournal.com profile] flower_cat had a similar experience; her mother was a technical writer and editor at SRI during the '60s and '70s.

It's kind of sad. There was incredible optimism in those days -- personal computers were coming, and they were going to remake society. Revolution was in the air, and computers were right there on the barricades along with sex, drugs, and rock-and-roll. The night-owl hackers at SAIL, the Peoples' Computer Company with its Wednesday potlucks, the Homebrew Computer Club meetings at SLAC (a short walk down Sand Hill from where I work these days) -- they're all gone now. I knew it was over when the 6th West Coast Computer Faire had more suits than freaks; the war Bill Gates started with his "Open Letter to Hobbyists" -- mentioned in the last chapter, and reproduced in full as the last illustration -- is still going on, and it still isn't clear who's winning.

If you'll excuse me, I'm going to crawl off to my corner and wallow in nostalgia for a while.
