Quantum Statement

The New Internet, the New World

Can Blogs Resemble the Human Mind?

The weblog Metafilter (MetaFilter, or MeFi) has evolved from a news filter / community blog into a reference site with a distributed folksonomy. The users who edit Metafilter adhere to explicit guidelines for what content is suitable for posting, how to respond to a post, and too many other rules of etiquette to list here. Wikipedia has a voluminous set of Policies and Guidelines for editing; there is also a more distilled version of its Ruleset.

Metafilter, unlike Digg, strictly enforces its guidelines. Metatalk is a companion blog dealing specifically with the enforcement of these guidelines and with usability issues. Typically, when a post, or FPP [Front Page Post], is deemed inappropriate by community members, it winds up in Metatalk, where Matt Haughey, Metafilter's housekeeper, diligently weeds it out (is that a mixed metaphor?).

If you ever want to check out a fascinating document, read Metafilter's Guidelines. Just kidding. It's really quite boring, but it is integral to the success of the site. That, and the diligence and alertness of its users. Metafilter is constantly grooming itself. Since this grooming is carried out in a semi-distributed fashion, there is less chance that anyone will be asleep at the wheel or wielding his or her power with abandon.

As a result of this vigilance, Metafilter has developed a distinct character: sharp, witty, insightful, and even compassionate. One facet of this personality is the enormous priority placed on quality control. A new member of the community might find this elitist or off-putting. The same trait shows up on other sites as well, such as Something Awful. These are adaptive conventions, adopted by community members to stave off trolls. In fact, all of these conventions were developed to ensure the orderly governance of a community forum within a largely un-policed Internet.

Metafilter encourages community members to post multiple links in each post. Single-link posts are discouraged within the community but tolerated to a certain extent. The multiple links transform each post from a mere link into something much more valuable: a group of links connected by some commonality. This commonality may be explicit or merely implicit. It may be a series of links about saber-tooth tigers, but it might also be a series of links on diverse topics that happen to contain the word ‘tiger’ or ‘saber.’

Here is a classic example of a post where the links share a common subject theme.

The final chromosome in the human genome has been sequenced. The Human Genome Project has completed sequencing Chromosome 1 and has published its work in Nature here. If you’re impatient, here’s a sneak preview..
posted by BlackLeotardFront at 7:45 PM PST – 32 comments

The author of the post has gone with an obvious choice for relating these disparate links. Whether it points to a Wikipedia article describing what a chromosome is or to the Human Genome Project website, each link is relevant to the Human Genome Project; all of them share this common thread.

Here is another example. It is not a classic case of subject-linking, but the common thread, Stephen Merritt, stays the same.

Is Stephen Merritt a racist? Sasha Frere-Jones, the New Yorker’s Pop Critic and maybe the finest music critic writing today, has long been an activist against rockism. Stephen Merritt, the gay, white auteur behind such postmodern pop experiments as 69 Love Songs, and sometime target of S/FJ’s ire, recently got into hot water with Jessica Hopper, among others, for allegedly racist comments made at the EMP Pop Music Conference, which is Christmas and Halloween all rolled into one for music crits and their fellow nerds. Slate’s John Cook defends Merritt, claiming that disliking rap doesn’t necessarily make one a racist, and S/FJ responds with some further thoughts. But was Frere-Jones accusing Merritt of racism, specifically, or simply of wack unexamined biases? And is that a fair criticism? Slate’s readers don’t seem to think so.
posted by maxreax at 4:53 PM PST – 177 comments

Here is a more novel example: the author has included not only many relevant links about Ayaan Hirsi Ali but also links to earlier Metafilter mentions of her. This type of post connects not just the external links but the links from those earlier posts as well.

Ayaan Hirsi Ali (née Magan)
has already been mentioned in several times in Metafilter. Whether you consider her a couragous campaigner for women’s rights and against Islamofascism, or a crass opportunist, there’s no denying that she’s some character. However, it now seems that her Becky-Sharp-ish rise to fame and power also left a similar trail of embittered ex-friends and lies that has ended up landing her in serious trouble with fellow right-winger (also previously mentioned in Metafilter) Rita Verdonk, Dutch Immigration Minister.
Before feeling too sorry for Ayaan, consider that she’s moving to Washington DC, where she’s landed a job at the American Enterprise Institute. I’m sure she’ll fit right in…
posted by Skeptic (34 comments total)

Each of these posts, though relatively compact, has a quality not unlike a human memory. Memories are visceral sensations that do not adhere to any metadata scheme. If we could tag an individual memory, it could have lots of different attributes. For example, Summer Camp: (sitting under a tree, the sound of creaky bunk beds, the color of my duffel bag, the smell of the dining hall, the sound of the bugle in the morning, the fear of going in a canoe, betrayal by a girl, etc., etc.). These tags might resurface at some other point in life. I may hear a sound on the radio that resembles the bugle from camp. The sound triggers a memory of hearing the bugle, along with other thoughts relevant to camp as well.
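If you'll forgive a programmer's doodle, here is roughly how I picture that kind of tag-triggered recall. This is a toy Python sketch; the memories, the tags, and the little recall() helper are all invented for illustration, not anything a real brain (or Metafilter) actually runs.

from collections import defaultdict

# A toy model of memories as bundles of tags. Hearing a "bugle" later
# pulls back every memory that shares that tag, much as one word in a
# Metafilter post can pull in links on wildly different subjects.
memories = {
    "summer camp": {"bugle", "creaky bunk beds", "duffel bag", "canoe", "dining hall"},
    "high school band": {"bugle", "sheet music", "marching"},
    "family road trip": {"duffel bag", "gas station coffee"},
}

# Build a reverse index: tag -> every memory that carries it.
index = defaultdict(set)
for memory, tags in memories.items():
    for tag in tags:
        index[tag].add(memory)

def recall(cue):
    """Return every memory associated with a sensory cue (tag)."""
    return sorted(index.get(cue, set()))

print(recall("bugle"))       # ['high school band', 'summer camp']
print(recall("duffel bag"))  # ['family road trip', 'summer camp']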

One might argue that web links differ from memories in that they are malleable and vulnerable to alteration. What if the recording of the bugle call I linked to were transformed into a beat box or removed entirely? My response is that most of our thoughts are ephemeral as well. When I recollect that bugle call, I might not get it exactly right. I might have only a fleeting impression of the bugle, or of the fact that there was a bugle, or a musical instrument of some kind. These are the pitfalls of having a brain. Sometimes memories stay crystal clear; other times, they fade, depending upon how much effort is put toward preserving them.

As the web grows out of its awkward ‘text’ and ‘jpg’ phase into a more multimedia experience, we will begin to see more parallels between blogs and memories. Perhaps this is the ultimate destination of the Internet for our human culture?

Technorati Tags: neuroscience, science, blogging, metafilter, MeFi, Wikipedia, Information Science, Web2.0, Web 2.0

May 18, 2006 | Filed under: Blogging, Information Science, Internet, mefi, metafilter, neuroscience, Science, Web 2.0, wikipedia

Then how come I can’t remember me pin number? -Ali G

Tasnim Abbas Raza of 3QuarksDaily wrote this essay last week about computers and the brain. It took me a few readings to get through it all, but his basic premise draws on a book by Jeff Hawkins, inventor of the PalmPilot and founder of the Redwood Center for Theoretical Neuroscience. Hawkins' book, On Intelligence, argues that the human brain is made of billions and billions of feedback loops. It's been a while since I read it, so I apologize for this lame recapitulation. These feedback loops are constantly sending signals from your sensory organs to your memory and back again, making the best guess as to how to interpret these sensations based on past experiences.
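Here's a back-of-the-napkin sketch of that loop as I understand it. To be clear, this is my own toy rendering, not Hawkins' model; the predict() and perceive() functions and the data in them are made up for illustration.

# A toy rendering of the memory-prediction loop: memory predicts the next
# sensation, the senses report what actually arrived, and a mismatch
# kicks the brain out of autopilot. All data here is invented.

memory = {
    "foot on pedal": "familiar pressure",
    "next step": "solid ground",
    "fingers on keyboard": "keys clicking",
}

def predict(context):
    """Best guess, based purely on what has always happened before."""
    return memory.get(context, "no idea")

def perceive(context, sensation):
    """Compare the actual sensation with the prediction."""
    expected = predict(context)
    if sensation == expected:
        return "autopilot: carry on"
    # Surprise: update memory and engage the critical faculties.
    memory[context] = sensation
    return f"expected {expected!r}, got {sensation!r} -- wake up and deal with it"

print(perceive("next step", "solid ground"))          # autopilot: carry on
print(perceive("fingers on keyboard", "a warm bun"))  # surprise: cheeseburger keyboard
print(perceive("fingers on keyboard", "a warm bun"))  # now expected; back on autopilot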

In one particularly enlightening passage, Hawkins describes the sensation of getting his bicycle out of the garage. He's done it a thousand times before, so it has become quite natural. When he puts his foot on the pedal, the pressure against his foot is a familiar cue that reminds his brain what to do next.

To expand on this further, I am an excellent typist (85 wpm, no joke) because I've done it so much. The moment I put my fingers on the keyboard, my mind recollects the sensation of the keys on my fingertips along with perhaps hundreds of other invisible cues (i.e., my posture, the click-clack of the keyboard, etc.) from the countless times I've done this before. In a sense, I've hardwired my brain for typing. This enables me to switch to a kind of autopilot where I don't have to think about the countless steps involved in executing a simple keystroke (starting perhaps with identifying the strange symbols on the weird contraption and which one to push, how hard to push it, how long to hold it down, etc., etc.). I just sit down and go. It's hard to conceive of how many times this happens every moment. But if you've ever watched a beginning computer user try to operate a computer, you've seen a very different thought process. Rather than ‘click here, click here, type here,’ a computer neophyte has to start with the more basic instructions: ‘Move mouse up, arrow thingy goes up. Move mouse left, arrow thingy goes left. Get arrow thingy over blue words, push mouse button repeatedly with sufficient force to destroy its delicate components.’

In addition, Hawkins asks us to imagine that one day you take a step and the ground, rather than being where you expect it to be, isn't there. Your neurons would fire madly trying to figure out what is happening, but perhaps there would be no past memory to elucidate it. You take another step and, again, the ground is not where you have always known it to be. From what I gather, it is at this point that you realize your trip to the Grand Canyon has ended in disaster; but the mind is built to adapt to these kinds of perceptual changes. If, while typing this blog, the keyboard instantly becomes a cheeseburger, my senses will immediately send a signal: ‘you're typing on a bun, you're typing on a bun, you're typing on a bun.’ Since I do not expect my keyboard to turn into a delicious sandwich, this shuts off the autopilot; I then immediately engage my critical faculties and figure out what to do next. (In this case, probably get some ketchup.)

The 3QuarksDaily piece is rather interesting, but I have a couple of problems with it. First, Tasnim's essay purports to be about the brain, yet he does not actually discuss the brain in this installment. (It is a series, however, so I expect more to come.) Second, Tasnim's premise is that the human brain works like a computer, combining tiny units of instruction into more complex instructions and connecting facets together into long strings of action.

He illustrates this with the example of being sent out for bread:

Here's what happens in my brain when I hear her request: I break it down into a series of smaller steps, something like

Get bread: START

1. Get money and apartment keys.
2. Go to supermarket.
3. Find bread.
4. Pay for bread.
5. Return with bread.
6. END.

Each of these steps is then broken down into smaller steps. For example, “Go to supermarket” may be broken down as follows:

Go to supermarket: START

1. Exit apartment.
2. Walk downstairs.
3. Turn left outside the building and walk until Broadway is reached.
4. Make right on Broadway and walk one and a half blocks to supermarket.
5. Make right into supermarket entrance.
6. END.

Similarly, “Exit apartment” is broken down into:

Exit apartment: START

1. Get up off couch.
2. Walk forward three steps.
3. Turn right and go down hallway until the front door.
4. If door chain is on, undo it.
5. Undo deadbolt lock on door.
6. Open door.
7. Step outside.
8. END.

Well, you get the idea. Of course, “Get up off couch” translates into things like “Bend forward” and “Push down with legs to straighten body,” etc. “Bend forward” itself translates into a whole sequence of coordinated muscular contractions. Each muscle contraction is actually a series of biochemical events that take place in the nerve and muscle fibres, and you can continue breaking each step down in this manner to the molecular or atomic level.
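In programming terms, that breakdown is just subroutines calling subroutines. Here is a toy rendering of it; the steps are paraphrased from his lists, but the function names and print statements are mine, purely for illustration.

# Tasnim's "get bread" hierarchy as subroutines calling subroutines.
# Each level only knows its own steps; the detail lives one call deeper.

def exit_apartment():
    for step in ("get up off couch", "walk forward three steps",
                 "go down hallway to the front door", "undo chain",
                 "undo deadbolt", "open door", "step outside"):
        print("      " + step)

def go_to_supermarket():
    print("    exit apartment:")
    exit_apartment()
    print("    walk downstairs, left to Broadway, right one and a half blocks, into the entrance")

def get_bread():
    print("  get money and apartment keys")
    go_to_supermarket()
    print("  find bread, pay for it, return with it")

get_bread()  # prints the whole nested to-do list, indentation showing depth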

I do not dispute that this is happening, but what Hawkins argues is that the brain is not at all like a computer. The brain is sending billions and billions of signals back and forth between the sensory organs and the memory. The brain is in a constant state of alert, monitoring every aspect of conscious existence (and unconscious too, I imagine) and making predictions about what will happen next. When I put my foot down, the ground will be there. If the ground is not there, the memory has no basis for making the next prediction. Such is the case when walking hastily down a flight of stairs. You get to the bottom and put your foot down expecting the ground to be there, and it's not. Oops, you forgot the last step. Your body falls forward, but your arms reach out automatically to grab something for support, or you position yourself for a softer landing.

When a computer experiences an unexpected problem, we all know what happens. You try to open a shortcut on your desktop, but you forgot that you deleted the program. So when you double-click the shortcut, your computer buzzes for a few seconds and then says, ‘hey, this program isn't where it's supposed to be. Now what?’ If the computer had a brain like a human's, it would react by saying, ‘you dumbass, you deleted that program last week because you updated it to a newer version. Here, let me change this shortcut for you so it works.’
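You can see the difference even in a few lines of error handling. The first function below is what the computer honestly does; the second, ‘brain-like’ one is pure fantasy that I've sketched by hand, with an invented guess-the-newer-version rule.

import glob
import os

def resolve_shortcut(path):
    """What the computer honestly does: buzz for a moment, then give up."""
    if os.path.exists(path):
        return path
    raise FileNotFoundError(f"{path} isn't where it's supposed to be. Now what?")

def resolve_shortcut_like_a_brain(path):
    """The fantasy version: notice a newer copy nearby and quietly repoint."""
    if os.path.exists(path):
        return path
    # "You dumbass, you updated it last week" -- look for a sibling whose
    # name starts the same way (e.g. editor-2.0 sitting next to editor-1.0).
    candidates = sorted(glob.glob(path.rsplit("-", 1)[0] + "*"))
    if candidates:
        return candidates[-1]  # silently repair the shortcut
    raise FileNotFoundError(path)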

Another problem I have with Tasnim's writing is his explanation of computer processes. In order to strengthen his argument about the hierarchical functioning of both the brain and the computer, Tasnim attempts to write in a hierarchical style. He starts by explaining two or more basic concepts in clear English, then combines them into a more complex concept, trusting that the reader will be able to make the ‘logical leap’ to understanding. This doesn't work to good effect. For one thing, it's very difficult to connect abstract concepts that you've just learned. As I said before, reading this essay took several days. I had to go through it piece by piece, working out each section before I could go on to the next one.

The best way I can explain this is by going back to high school. I'm sitting in algebra class, trying to figure out what the hell is going on. The teacher is writing a problem on the board. She asks me if I can solve it. I try and fail, because I have never seen such a problem before. The teacher then explains to me, step by step, how to solve the problem in a way I understand. That night, I get home and start on my algebra homework. By Tasnim's account, I should have no trouble breaking the problem down into its constituent parts and solving it the way I saw it done earlier in the day. But that is not what actually happens. I try to remember the steps my teacher showed me, but since I only saw her do the problem once, the instructions are incomplete and hard to follow. I get stuck, I make a mistake, I don't know which part to do next. The next day, the teacher asks me to solve the algebra problem. I explain that I couldn't figure it out, and so she explains it to me again. At this point, I experience the visceral sensation of ‘getting it.’ The instructions she gave me earlier were good, but I needed to attach them somehow to my own experience with the given problem in order to conceptualize it. Having now seen my own error in conceptualizing the problem and resolved it, when I go home that evening I am able to solve a different algebra problem much more easily.

Computers don't work like that. They follow very explicit instructions that allow them to solve a particular type of problem over and over again. See Turing Machine. They don't often make guesses, although some computer chips are designed to make predictions about what might happen next (branch prediction) in order to make the machine run faster.
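For the curious, here is about as ‘explicit’ as instructions get: a toy Turing machine in a few lines of Python. The transition table is an invented example that just flips every bit on the tape and halts, but the point is that the machine does nothing except look up the current state and symbol and do exactly what the table says, over and over.

# A toy Turing machine: an explicit transition table, followed blindly.
# (state, symbol read) -> (symbol to write, head move, next state)
table = {
    ("scan", "0"): ("1", 1, "scan"),   # write 1, move right, keep scanning
    ("scan", "1"): ("0", 1, "scan"),   # write 0, move right, keep scanning
    ("scan", "_"): ("_", 0, "halt"),   # blank cell: stop
}

tape = list("101100") + ["_"]
head, state = 0, "scan"
while state != "halt":
    write, move, state = table[(state, tape[head])]
    tape[head] = write
    head += move

print("".join(tape))  # prints 010011_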

Overall, I enjoyed Tasnim’s article, but I have seen better primers on how computers work. One is a very useful e-book (which I found pre-del.icio.us, so I can’t link to it). I will post a link to it in my next entry, IF I CAN REMEMBER WHERE I PUT IT.


April 7, 2006 | Filed under: Authors, Jeff Hawkins, neuroscience, technology