mdlbear: (technonerdmonster)

Most humans multitask rather badly -- studies have shown that when one tries to do two tasks at the same time, both tasks suffer. That's why many states outlaw using a cell phone while driving. Some people are much better than others at switching between tasks, especially similar tasks, and so give the appearance of multitasking. There is still a cost to switching context, though. The effect is much smaller if one of the tasks requires very little attention: knitting during a conversation, say, or sipping coffee while programming. (Although I have noticed that if I get deeply involved in a programming project my coffee tends to get cold.) It may surprise you to learn that computers have the same problem.

Your computer isn't really responding to your keystrokes and mouse clicks, playing a video from YouTube in one window while running a word processor in another, copying a song to a thumb drive, fetching pages from ten different web sites, and downloading the next Windows update, all at the same time. It's just faking it by switching between tasks really fast. (That's only partially true. We'll get to that part later, so if you already know about multi-core processors and GPUs, please be patient. Or skip ahead. Like a computer, my output devices can only type one character at a time.)

Back when computers weighed thousands of pounds, cost millions of dollars, and were about a million times slower than they are now, people started to notice that their expensive machines were idle a lot of the time -- they were waiting for things to happen in the "real world", and while the computer was reading in the next punched card it wasn't getting much else done. As computers got faster -- and cheaper -- the effect grew more and more noticeable, until some people realized that they could make use of that idle time to get something else done. The first operating systems that did this were called "foreground/background" systems -- they used the time when the computer was waiting for I/O to switch to a background task that did a lot of computation and not much I/O.

Once when I was in college I took advantage of the fact that the school's IBM 1620 was just sitting there most of the night to write a primitive foreground/background OS that consisted of just two instructions and a sign. The instructions dumped the computer's memory onto punched cards and then halted. The sign told whoever wanted to use the computer to flip a switch, wait for the dump to be punched out, and load it back in when they were done with whatever they were doing. I got a solid week of computation done. (It would take much less than a second on your laptop or even your phone, but we had neither laptop computers nor cell phones in 1968.)

By the end of the 1950s computers were getting fast enough, and had enough memory, that people could see where things were headed, and several people wrote papers describing how one could time-share a large, fast computer among several people to give them each the illusion that they had a (perhaps somewhat less powerful) computer all to themselves. The users would type programs on a teletype machine or some other glorified typewriter, and since it takes a long time for someone to type in a program or make a change to it, the computer had plenty of time to do actual work. The first such systems were demonstrated in 1961.

I'm going to skip over a lot of the history, including minicomputers, which were cheap enough that small colleges could afford them (Carleton got a PDP-8 the year after I graduated). Instead, I'll say a little about how timesharing actually works.

A computer's operating system is there to manage resources, and in a timesharing OS the goal is to manage them fairly, and switch contexts quickly enough for users to think that they're using the whole machine by themselves. There are three main resources to manage: time (on the CPU), space (memory), and attention (all those users typing at their keyboards).

There are two ways to manage attention: polling all of the attached devices to see which ones have work to do, and letting the devices interrupt whatever is going on. Polling wastes time checking devices that have nothing to say, so unless a device needs attention almost constantly it's far more efficient to let it interrupt the processor. That's how almost everything works these days.
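
Here's a toy sketch of the difference in Python -- not that any real operating system looks like this; the "devices", their event queues, and the handle() function are all invented for illustration:

    import queue, threading

    # Invented stand-ins for hardware: each "device" is just a queue of events.
    devices = {"keyboard": queue.Queue(), "mouse": queue.Queue()}

    def handle(name, event):
        print(name, "->", event)

    def poll_loop():
        # Polling: ask every device in turn whether it has anything,
        # over and over, even when nothing is happening.
        while True:
            for name, q in devices.items():
                try:
                    handle(name, q.get_nowait())
                except queue.Empty:
                    pass

    def interrupt_loop():
        # Interrupt style: sleep until *some* device announces work.
        # A helper thread per device stands in for the hardware interrupt line.
        ready = queue.Queue()

        def forward(name, q):
            while True:
                ready.put((name, q.get()))

        for name, q in devices.items():
            threading.Thread(target=forward, args=(name, q), daemon=True).start()
        while True:
            handle(*ready.get())  # blocks here; the CPU is free for other work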

When an interrupt comes in, the computer has to save whatever it was working on, do whatever work is required, and then put things back the way they were and get back to what it was doing before. This takes time. So does writing about it, so I'll just mention it briefly before getting back to the interesting stuff.

See what I did there? This is a lot like what I'm doing writing this post, occasionally switching tasks to eat lunch, go shopping, sleep, read other blogs, or pet the cat that suddenly sat on my keyboard demanding attention.

Let's look at time next. The computer can take advantage of the fact that many programs perform I/O to use the time when it's waiting for an I/O operation to finish to look around and see whether there's another program waiting to run. Another good time to switch is when an interrupt comes in -- the program's state already has to be saved to handle the interrupt. There's a bit of a problem with programs that don't do I/O -- these days they're usually mining bitcoin. So there's a clock that generates an interrupt every so often. In the early days that used to be 60 times per second (50 in Britain); a sixtieth of a second was sometimes called a "jiffy". That way of managing time is often called "time-slicing".
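
A round-robin time-slicer is simple enough to sketch. Here's a toy version in Python, where each "program" is a generator and every yield stands in for the clock interrupt that ends its time slice; the count_to programs are made up for the example:

    from collections import deque

    def count_to(name, n):
        # A pretend program that does a little work, then gets interrupted.
        for i in range(1, n + 1):
            print(name, i)
            yield                    # the "jiffy" is up; give the CPU back

    def run(programs):
        ready = deque(programs)
        while ready:
            prog = ready.popleft()   # pick the next program in line
            try:
                next(prog)           # let it run for one time slice
                ready.append(prog)   # not done yet: back of the line
            except StopIteration:
                pass                 # finished; drop it from the queue

    run([count_to("A", 3), count_to("B", 2), count_to("C", 4)])

Run it and the output from A, B, and C comes out interleaved, one "jiffy" at a time, which is exactly the illusion a time-sharing system is selling.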

The other way of managing time is multiprocessing: using more than one processor at the same time. (Told you I'd get to that eventually.) The amount of circuitry you can put on a chip keeps increasing, but the amount of circuitry required to make a CPU (a computer's Central Processing Unit) stays pretty much the same. The natural thing to do is to add another CPU. That's the point at which CPUs on a chip started being called "cores"; multi-core chips started hitting the consumer market in the mid-2000s.

There is a complication that comes in when you have more than one CPU, and that's keeping them from getting in one another's way. Think about what happens when you and your family are making a big Thanksgiving feast in your kitchen. Even if it's a pretty big kitchen and everyone's working on a different part of the counter, you're still occasionally going to have times when more than one person needs to use the sink or the stove or the fridge. When this happens, you have to take turns or risk stepping on one another's toes.

You might think that the simplest way to do that is to run a completely separate program on each core. That works until you have more programs than processors, and it happens sooner than you might think because many programs need to do more than one thing at a time. Your web browser, for example, starts a new process every time you open a tab. (I am not going to discuss the difference between programs, processes, and threads in this post. I'm also not going to discuss locking, synchronization, and scheduling. Maybe later.)
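
Just to make the point, here's a little Python sketch that asks how many cores the machine has and then deliberately starts several times that many worker processes, leaving the operating system to juggle them; the crunch() busy-work is invented:

    import os
    from concurrent.futures import ProcessPoolExecutor

    def crunch(n):
        # Made-up busy-work standing in for a real computation.
        total = 0
        for i in range(n):
            total += i * i
        return total

    if __name__ == "__main__":
        cores = os.cpu_count() or 1
        tasks = cores * 4                      # several times more work than cores
        with ProcessPoolExecutor(max_workers=tasks) as pool:
            results = list(pool.map(crunch, [2_000_000] * tasks))
        print(tasks, "tasks finished on", cores, "cores")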

The other thing you can do is to start adding specialized processors for offloading the more compute-intensive tasks. For a long time that meant graphics -- a modern graphics card has more compute power than the computer it's attached to, because the more power you throw at making pretty pictures, the better they look. Realistic-looking images used to take hours to compute. In 1995 the first computer-animated feature film, Toy Story, was produced on a fleet of 117 Sun Microsystems computers running around the clock. They got about three minutes of movie per week.

Even a mediocre graphics card can generate better-quality images at 75 frames per second. It's downright scary. In fairness, most of that performance comes from specialization. Rather than being general-purpose computers, graphics cards mostly just do the computations required for simulating objects moving around in three dimensions.

The other big problem, in more ways than one, is space. Programs use memory, both for code and for data. In the early days of timesharing, if a program was ready to run that didn't fit in the memory available, some other program got "swapped out" onto disk. All of it. Of course, memory wasn't all that big at the time -- a megabyte was considered a lot of memory in those days -- but it still took a lot of time.

Eventually, however, someone hit on the idea of splitting memory up into equal-sized chunks called "pages". A program doesn't use all of its memory at once, and most operations tend to be pretty localized. So a program runs until it needs a page that isn't in memory. The operating system then finds some other page to evict -- usually one that hasn't been used for a while. The OS writes out the old page (if it has to; if it hasn't been modified and it's still around in swap space, you win), and schedules the I/O operation needed to read the new page in. And because that takes a while, it goes off and runs some other program while it's waiting.

There's a complication, of course: each program thinks its memory is a simple sequence of consecutive locations, so you need to keep track of where each of its pages actually ended up. That means you need a "page table" or "memory map" to record the correspondence between the pages scattered around the computer's real memory and the simple virtual memory that the program thinks it has.
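
Here's a toy page table in Python to make that concrete. The page size, the number of frames, and the least-recently-used eviction policy are all made up for the sketch; a real OS keeps this structure in hardware-assisted tables inside the kernel, not in Python:

    from collections import OrderedDict

    PAGE_SIZE = 4096        # invented numbers, just for the sketch
    NUM_FRAMES = 4

    page_table = OrderedDict()            # virtual page number -> physical frame
    free_frames = list(range(NUM_FRAMES))
    faults = 0

    def translate(vaddr):
        # Turn a virtual address into (physical frame, offset), faulting if needed.
        global faults
        vpage, offset = divmod(vaddr, PAGE_SIZE)
        if vpage not in page_table:       # page fault
            faults += 1
            if free_frames:
                frame = free_frames.pop()
            else:
                # Evict the least recently used page; a real OS would write it
                # out to swap space here if it had been modified.
                _, frame = page_table.popitem(last=False)
            page_table[vpage] = frame     # pretend we read the page in from disk
        page_table.move_to_end(vpage)     # mark this page as recently used
        return page_table[vpage], offset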

There's another complication: it's perfectly possible (and sometimes useful) for a program to allocate more virtual memory than the computer has space for in real memory. And it's even easier to have a collection of programs that, between them, take up more space than you have.

As long as each program only uses a few separate regions of its memory at a time, you can get away with it. The memory that a program needs at any given time is called its "working set", and with most programs it's pretty small and doesn't jump around too much. But not every program is that well-behaved, and even when they are, sometimes there are just too many of them. At that point you're in trouble: even if there's plenty of swap space, there isn't enough real memory for every program to keep its whole working set swapped in. The OS ends up frantically swapping pages in and out, and everything slows to a crawl. It's called "thrashing". You may have noticed it when you have too many browser tabs open.
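
You can watch the working-set effect with one more toy sketch, this one counting page faults for a repeating reference pattern under least-recently-used eviction; the page numbers and frame count are invented:

    from collections import OrderedDict

    def fault_count(pages, frames, passes=3):
        # Count page faults when the same pages are touched over and over,
        # with only `frames` physical frames and least-recently-used eviction.
        resident = OrderedDict()
        faults = 0
        for _ in range(passes):
            for p in pages:
                if p in resident:
                    resident.move_to_end(p)           # recently used
                else:
                    faults += 1
                    if len(resident) >= frames:
                        resident.popitem(last=False)  # evict the oldest page
                    resident[p] = True
        return faults

    print(fault_count(range(3), frames=4))  # working set fits: 3 faults, then none
    print(fault_count(range(6), frames=4))  # too big: every touch faults (18)

Three pages in four frames fault only while warming up; six pages in four frames fault on every single touch, and on a real machine each of those faults means a trip to the disk.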

The only things you can do when that happens are to kill some large programs (Firefox is my first target these days), or re-boot. (When you restart, even if your browser restores its session to the tabs you had open when you stopped it, you're not in trouble again because it only starts a new process when you look at a tab.)

And at this point, I'm going to stop because I think I've rambled far enough. Please let me know what you think of it. And let me know which parts I ought to expand on in later posts. Also, tell me if I need to cut-tag it.

Another fine post from The Computer Curmudgeon (also at computer-curmudgeon.com). If you found it interesting or useful, you might consider using one of the donation buttons on my profile page.

NaBloPoMo stats:
   8632 words in 13 posts this month (average 664/post)
   2035 words in 1 post today

mdlbear: Wild turkey hen close-up (turkey)

Well, gratitude is good no matter what day it happens on. Though 9/11 is one of the worse days for it.

  • Still, the terrorist attacks 14 years ago gave me an opportunity to take a cheap flight to Ohio for my first OVFF. So there's that.
  • I'm also grateful for my family. I'm not saying they keep me sane, but they do keep the craziness from getting completely self-destructive.
  • My cane deserves a mention. Even when my back and knees are almost recovered, it helps. If only to give me something to lean on if I have to stand up, and a seat on the bus so I don't have to stand on something that's moving.
  • And of course continuing employment, along with an increase in productivity.
  • And finally, fervent thanks that things are not as bad as they could be.
mdlbear: the positively imaginary half of a cubic mandelbrot set (Default)

The main news this week is that my Mom had open-heart surgery Tuesday morning. They replaced her mitral valve, and repaired another (which wasn't in the original plan, so it went longer than expected). She was in really bad shape when my brother drove her to the hospital in the morning, and there was some debate as to whether they should do the surgery. She's 93.

We needn't have worried. They had her up and walking the next day; she called me on Wednesday sounding like her old self, and she's bouncing back much faster than her doctors expected. I'm not surprised; Mom's amazing, and she keeps on proving it.

The moon landing was 45 years ago last Sunday. Sad -- we were all sure there would be lunar colonies by now. Not to mention flying cars, robots, artificial intelligence, and free single-payer health care for everyone in the US.

Lots of good links in the notes.


45

2014-07-20 06:34 pm
mdlbear: the positively imaginary half of a cubic mandelbrot set (Default)

If I remember correctly, I watched the moon landing on the TV in the lounge at the Stanford AI Lab, 45 years ago today. It was the start of my first year of grad school.

I missed my 45th reunion at Carleton a few weeks ago. IIRC I went to my 25th, but it might have been my 30th.

My 50th high school reunion is next year.

I don't think I count as middle-aged anymore.

mdlbear: (distress)

Today in history: this country suffered a major defeat in the Battle of Brandywine, September 11, 1777.

The battle, which was a decisive victory for the British, left Philadelphia, the revolutionary capital, undefended. The British captured the city on September 26, beginning an occupation that would last until June 1778.

mdlbear: the positively imaginary half of a cubic mandelbrot set (Default)

So, once again, I have managed to overlook a day's worth of notes and end up having to post them out of sequence. This doesn't bother me too much, but it bothers me.

Fortunately, my Hiroshima Day post was done separately. That may have been what threw off my reckoning, actually.

My headache, etc., came back, leading me to speculate that it takes a couple of days for the methocarbamol and naproxen to build up a sufficient concentration. My doctor confirmed it this afternoon, diagnosing it as a trapezius muscle strain. Did I mention that it hurts?

I also bought what turned out to be a Manhasset table top music stand. Very light weight. With a minor application of vice grips, it attaches nicely to a mic stand quick-connect... and fits nicely in my checked suitcase.

I did quite a lot of puttering, of various sorts.

A pretty good day, actually. Links up in the notes, as usual.

mdlbear: the positively imaginary half of a cubic mandelbrot set (Default)

[livejournal.com profile] ysabetwordsmith links to the various NASA anniversaries.

The first moon landing took place in 1969, 41 years ago. I had just graduated from college, and moved to the Bay Area for graduate school.

One of my best friends wasn't even born yet.

mdlbear: (g15-meters)
Multics
Overview

Multics (Multiplexed Information and Computing Service) was a mainframe timesharing operating system that began at MIT as a research project in 1965. It was an important influence on operating system development.
History of Multics

The plan for Multics was presented to the 1965 Fall Joint Computer Conference in a series of six papers. It was a joint project with M.I.T., General Electric, and Bell Labs. Bell Labs dropped out in 1969, and in 1970 GE's computer business, including Multics, was taken over by Honeywell (now Bull).

MIT's Multics research began in 1964, led by Professor Fernando J. Corbató at MIT Project MAC, which later became the MIT Laboratory for Computer Science (LCS) and then Computer Science And Artificial Intelligence Laboratory (CSAIL). Starting in 1969, Multics was provided as a campus-wide information service by the MIT Information Processing Services organization, serving thousands of academic and administrative users.

Multics was conceived as a general purpose time-sharing utility. It would be a commercial product for GE, which sold time-sharing services. It became a GE and then Honeywell product. About 85 sites ran Multics. However, it had a powerful impact in the computer field, due to its many novel and valuable ideas.

Since it was designed to be a utility, such as electricity and telephone services, it had numerous features to provide high availability and security. Both the hardware and software were highly modular so that the system could grow in size by adding more of the appropriate resource even while the service was running. Since services were shared by users who might not trust each other, security was a major feature with file sharing provided at the file level via access controls. For more information, see: Wikipedia's Multics: Novel Ideas

LCS research on Multics ended in the late 1970s, and Bull ended Multics development in 1985. MIT shut down its Multics service in 1988. The last Multics system was deactivated in 2000.

Multics Source and Documentation

In order to preserve the ideas and innovations that made Multics so important in the development of computer systems, Bull HN has provided the source code for the final Multics release, MR 12.5 of November 1992 to MIT. It is a generous contribution to computer science knowledge and is provided for academic purposes. Additionally, we intend this site to become a repository for many papers and documents that were created during the Multics development as a complement to the other Multics sites.

Multics Source and Listings
That last link says it all. There are many ideas in Multics that are still being re-invented incorrectly today. If you have any interest at all in the architecture and history of computer systems, go read it.
mdlbear: the positively imaginary half of a cubic mandelbrot set (Default)
CONELRAD | DAISY: THE COMPLETE HISTORY OF AN INFAMOUS AND ICONIC AD - PART ONE
Every election season when politicians unleash their expensive and (usually) unimaginative attack ads, op-ed writers invoke the unofficial title of the most notorious 60 seconds in advertising history: "The Daisy Ad" (official title: "Peace, Little Girl," aka "Daisy Girl," "The Daisy Spot," aka "Little Girl – Countdown"). The spot features a little girl picking petals off of a daisy in a field and counting out of sequence just before an adult voiceover interjects a "military" countdown which is then followed by stock footage of a nuclear explosion and the cautionary words of President Lyndon B. Johnson: "These are the stakes – to make a world in which all of God's children can live, or to go into the dark. We must either love each other, or we must die." The ad – which never identifies its target – was aimed at reinforcing the perception that the 1964 Republican candidate for president, Senator Barry M. Goldwater, could not be trusted with his finger on the button. As has often been recited, the Daisy ad aired only once as a paid advertisement – on NBC during the network movie (DAVID AND BATHSHEBA) on Monday, September 7, 1964. Since that long ago Labor Day, the film of the child and her daisies has been re-played millions of times.
(Via BoingBoing, of course.)

I've seen it. It was very effective.
mdlbear: (space colony)

July 20, 1969.

By coincidence, July 20 was the original due date for our first child, which is why [livejournal.com profile] chaoswolf's middle name is Diana. She decided to arrive early, though, which is why she celebrates her birthday during Westercon.

This year, I'm celebrating by burning a disk which I hope is epsilon away from a master for Coffee, Computers, and Song! (It's still available for preorder for the next few weeks. After it's real I'll have to start charging sales tax and shipping.)

Update: I had to dash to get to a meeting at work, but about 1/2 hour after posting this I put a call in to Oasis and set the wheels in motion. I'm feeling much better about the schedule since discovering that I can get the project fast-tracked for only an extra $200. Disks at ConChord are looking at least possible, if not inevitable.

mdlbear: (g15-meters)

Just finished my final panel, on computer history. The program book blurb made it sound like it was mostly about Moore's Law and the way computers have evolved from the last century to the present, but in fact it was the usual bunch of old fogies reminiscing about the way things used to be in the good old days when men were men and transistors were germanium.

Fun, and I didn't have to moderate it, so I'm happy. It's been a good con, but now it's time to go home and take a day's worth of vacation.

mdlbear: (ccs-cover)
IBM 1401 Mainframe, the Musical
When IBM chief maintenance engineer Jóhann Gunnarsson started tinkering with the IBM 1401 Data Processing System, believed to have been the first computer to arrive in his native Iceland in 1964, he noticed an electromagnetic leak from the machine's memory caused a deep, cellolike hum to come from nearby AM radios.

It was a production defect but, captivated, amateur musician Gunnarson and his colleagues soon learned how to reprogram the room-size business workhorse's innards to emit melodies that rank amongst the earliest in a long line of Scandinavian digital music.

Fast-forward four decades, and recently discovered tape recordings of Gunnarson's works form the basis of a touring song-and-dance performance, IBM 1401: A User's Manual. The show was composed by Gunnarson's son Jóhann Jóhannsson, with interpretive dance choreographed by Erna Omarsdotti, whose father is another IBM alum.
But never mind that. There's a video clip at the end of the article. It's boring.

The really cool thing about this article is the link to this web site of IBM 1401 movies and sounds. It includes sound clips of music played on the 1401's chain printer, and a link to Movies-n-Sounds of Antique Computers. In particular, this awesome movie of an IBM 650 starting up, and an audio clip of the 650's drum spinning up.

Actually, I came fairly close to synthesizing that in Audacity, as you can hear in Vampire Megabyte [ogg] [mp3], available soon on my upcoming CD, Coffee, Computers and Song.
mdlbear: (g15-meters)
Bendix G-15 - Wikipedia, the free encyclopedia
The Bendix G-15 computer was introduced in 1956 by the Bendix Corporation, Computer Division, Los Angeles, California. It was about 5 by 3 by 3 feet and weighed about 950 pounds. The base system, without peripherals, cost $49,500. A working model cost around $60,000. It could also be rented for $1,485 per month. It was meant for scientific and industrial markets. The series was gradually discontinued when Control Data Corporation took over the Bendix computer division in 1963.

The chief designer of the G-15 was Harry Huskey, who had worked with Alan Turing on the ACE in the United Kingdom and on the SWAC in the 1950s. He made most of the design while working as a professor at Berkeley, and other universities. David C. Evans was one of the Bendix engineers on the G-15 project. He would later become famous for his work in computer graphics and for starting up Evans & Sutherland with Ivan Sutherland.
The icon is a close-up of the meters on the front panel; they allowed the operator to adjust the power-supply voltages until the vacuum tubes were happy. The image it was ganked from was found here.

mdlbear: (hacker glider)
When we got our hotel room last night I immediately recognized our room number, 1403, as the number of the printer associated with the IBM 1401 computer.
mdlbear: (chernobyl bunny)
Atomic bombings of Hiroshima and Nagasaki - Wikipedia, the free encyclopedia
On the morning of August 6, 1945 the United States Army Air Forces dropped the nuclear weapon "Little Boy" on the city of Hiroshima, followed three days later by the detonation of the "Fat Man" bomb over Nagasaki, Japan.
mdlbear: (sureal time)
Boing Boing: Use of term "flash mob" dates back to 1800s Tasmania?

Of course, neither "flash" nor "mob" meant the same then as they do now; "flash" referred to a style of dress.
mdlbear: (hacker glider)

Just finished reading What the Dormouse Said: How the 60s Counterculture Shaped the Personal Computer Industry by John Markoff. It was a gift from Smalltalk hacker and former roommate Ted Kaehler (he gets a brief mention in Chapter 7). What's amazing about it is how many of the people mentioned in it I've met, and in many cases worked with. (Of course, having been at SAIL, Xerox PARC, and later at Zilog helps.) My wife the [livejournal.com profile] flower_cat had a similar experience; her mother was a technical writer and editor at SRI during the '60s and '70s.

It's kind of sad. There was incredible optimism in those days -- personal computers were coming, and they were going to remake society. Revolution was in the air, and computers were right there on the barricades along with sex, drugs, and rock-and-roll. The night-owl hackers at SAIL, the People's Computer Company with its Wednesday potlucks, the Homebrew Computer Club meetings at SLAC (a short walk down Sand Hill from where I work these days) -- they're all gone now. I knew it was over when the 6th West Coast Computer Faire had more suits than freaks; the war Bill Gates started with his "Open Letter to Hobbyists" -- mentioned in the last chapter, and reproduced in full as the last illustration -- is still going on, and it still isn't clear who's winning.

If you'll excuse me, I'm going to crawl off to my corner and wallow in nostalgia for a while.
