Archive for February, 2010

exhausting

Charles and I have been watching the Olympics avidly, as it’s provided a welcome and unexpected outlet for the violent anti-Canadian sentiment that we hadn’t even realized was building up inside us, poisoning our hearts and minds. They think they’re so great. Or at least, they think they’re okay, but will resolve to try harder. Those bastards.

The Olympics come with various apocryphal stories, most having to do with athletes overcoming personal adversity.  But Charles also heard a good one about the South Korean speed skating team’s training regimen.  Supposedly they’re in the habit of doing interval training where they sprint for 40 seconds, then walk for 20, then repeat the cycle seven more times.  Charles said he’d tried it and it was a surprisingly exhausting way to spend 8 minutes.

I can back that up.  When I was done with my first attempt, I could swear I tasted blood.  I kept walking around the track for what seemed like forever, afraid to let my heart rate drop to the level it desperately wanted to.  Eventually I was fine, though I felt like I had completely exfoliated the inside of my lungs.  Man, was it exhausting.  And I was cheating, using 30-second rests!
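For the curious, the regimen is simple enough to sketch as a little script. The 40/20/8 numbers come straight from the post; the function names and structure are my own invention, and a real version would presumably beep or sleep between phases:

```python
# Minimal sketch of the South Korean speed skaters' interval workout
# described above: 40-second sprints, 20-second walks, eight cycles.
# The 40/20/8 figures are from the post; everything else is assumed.

def build_schedule(sprint=40, rest=20, cycles=8):
    """Return the workout as a list of (phase, seconds) tuples."""
    schedule = []
    for _ in range(cycles):
        schedule.append(("sprint", sprint))
        schedule.append(("walk", rest))
    return schedule

def total_seconds(schedule):
    return sum(seconds for _, seconds in schedule)

if __name__ == "__main__":
    plan = build_schedule()
    for i, (phase, seconds) in enumerate(plan, 1):
        print(f"{i:2d}. {phase:6s} {seconds}s")
    print(f"total: {total_seconds(plan) / 60:.0f} minutes")
```

Eight one-minute cycles works out to exactly the eight minutes Charles mentioned (my cheater's version, with 30-second rests, runs longer).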

Perhaps your cardiovascular system is not as pathetic as my own (I hope not; these lungs have proven to be serious underperformers).  Either way, this is worth a try.  And as a bonus, it produces an unusually bumpy and cool-looking heart rate graph, captured courtesy of the neat Garmin monitoring doohickey that Emily bought for me a year ago (that first peak is from me climbing up the six floors to the YMCA’s indoor track).

the annotated roundup

The thing about my uncle is 100% true.

yuppie PSA

I just changed the filters on my Etymotic headphones for the first time.  It’s been years since I got them, and I’m an idiot for not doing it sooner.  I had honestly forgotten that these things — anything, really — could sound this good. I thought I was just getting hopelessly old.

If you own a pair of these earphones, and have owned them for a while, go buy some replacement filters.  I didn’t believe the manual when I read it, either, but this actually does make a huge difference.  Wow.

fly, EAGLE, fly

I made my first EAGLE schematic!  I’m sure it’s horribly broken, but I’m still feeling pretty good about it.


If you’re interested in getting the source file, head over to the post on Sunlight Labs.  And if you do, please be gentle.  Advice on what I’ve gotten wrong would be welcome, though.

ALSO: Thanks to the diligent outreach efforts of Sunlight’s own Nicko Margolies, the original post has now been picked up by Hack-a-Day and MAKE. Neat! And better still, educational: I’ve already had a number of revisions suggested to me by the Hack-a-Day commenters.

ALSO ALSO: Engadget, too! Though I’ve gotta say: man, are the commenters there morons.

it’d be so money, bro

The wages of internet success, I suppose.

AAAAAND: Gizmodo. That’ll just about do it, I think. Their commenters are mean. Which I like. I want the apparent EE to offer some more information, though.  I think he’s at least partly wrong.

to clarify

Matt has graciously responded, pointing me toward a David Chalmers essay that I think I’ve skimmed in the past. Revisiting it, I think I didn’t express the question I’m wondering about clearly enough.  Chalmers (and Matt) seem to basically be saying that it’s not worth letting Cartesian hypotheticals keep you up at night, no matter how irrefutably plausible they may be.  I agree!

But what I find interesting about the holographic hypothesis is what Chalmers dismisses at the end of this passage:

The Computational Hypothesis says that physics as we know it is not the fundamental level of reality. Just as chemical processes underlie biological processes, and microphysical processes underlie chemical processes, something underlies microphysical processes. Underneath the level of quarks and electrons and photons is a further level: the level of bits. These bits are governed by a computational algorithm, which at a higher level produces the processes that we think of as fundamental particles, forces, and so on.

The Computational Hypothesis is not as widely believed as the Creation Hypothesis, but some people take it seriously. Most famously, Ed Fredkin has postulated that the universe is at bottom some sort of computer. More recently, Stephen Wolfram has taken up the idea in his book A New Kind of Science, suggesting that at the fundamental level, physical reality may be a sort of cellular automaton, with interacting bits governed by simple rules. And some physicists have looked into the possibility that the laws of physics might be formulated computationally, or could be seen as the consequence of certain computational principles.

One might worry that pure bits could not be the fundamental level of reality: a bit is just a 0 or a 1, and reality can’t really be zeroes and ones. Or perhaps a bit is just a “pure difference” between two basic states, and there can’t be a reality made up of pure differences. Rather, bits always have to be implemented by more basic states, such as voltages in a normal computer.

I don’t know whether this objection is right. I don’t think it’s completely out of the question that there could be a universe of “pure bits”. But this doesn’t matter for present purposes. We can suppose that the computational level is itself constituted by an even more fundamental level, at which the computational processes are implemented. It doesn’t matter for present purposes what that more fundamental level is. All that matters is that microphysical processes are constituted by computational processes, which are themselves constituted by more basic processes. From now on I will regard the Computational Hypothesis as saying this.

I don’t know whether the Computational Hypothesis is correct. But again, I don’t know that it is false. The hypothesis is coherent, if speculative, and I cannot conclusively rule it out.

The Computational Hypothesis is not a skeptical hypothesis. If it is true, there are still electrons and protons. On this picture, electrons and protons will be analogous to molecules: they are made up of something more basic, but they still exist. Similarly, if the Computational Hypothesis is true, there are still tables and chairs, and macroscopic reality still exists. It just turns out that their fundamental reality is a little different from what we thought.

The situation here is analogous to that with quantum mechanics or relativity. These may lead us to revise a few “metaphysical” beliefs about the external world: that the world is made of classical particles, or that there is absolute time. But most of our ordinary beliefs are left intact. Likewise, accepting the Computational Hypothesis may lead us to revise a few metaphysical beliefs: that electrons and protons are fundamental, for example. But most of our ordinary beliefs are unaffected.

Those “few metaphysical beliefs” are important, though! Contrary to what Chalmers implies, similar fundamental discoveries in other domains have, in fact, greatly informed our concept of how consciousness operates.  The understanding that the brain is the seat of the mind; that neuronal firing is essential to its function; and that that function can be mediated by drugs or damage that alter reported phenomenal experience and, we have strong reason to suspect, the mind itself — these may all be philosophically irrelevant from Chalmers’ perspective, as none of them has seriously shaken our faith in personal agency or qualia or the integrity of the conceptual world we inhabit or anything like that.  Chalmers would probably not go this far, but I think personal experience has an irresistible, biologically determined immediacy, and the practical, personal psychological upshot of our discoveries about consciousness seems almost certain to be minimal.  Being alive is going to keep seeming the way it currently does.

But the aforementioned discoveries did give us some good clues about the limits of consciousness (its time resolution, for instance), and avenues for thinking about how to create it artificially, and how morally concerned we should be about canned tuna’s dolphin-safe status. Certainly they blew dualism right out of the water (as far as most people are concerned).  It seems like the truth of the holographic hypothesis — and that we experience ourselves as part of the holographic projection and not of the underlying lower-dimensional brane — could also have some implications for how we think about, say, the possibility of panpsychism.

Or maybe not!  My aim is not to imply that the HH could be cause for a “nothing is real!”-style freakout, but I do think there might be more meat here than Matt’s first impression implies.

On the other hand, the most likely explanation is that I’m fundamentally misunderstanding the HH (or that New Scientist is misconstruing it).

posts I am not qualified to write

Or rather, posts I am so unqualified to write that even I am not comfortable writing them (but I wish Julian or Yglesias or Dylan Matthews or Matt Zeitlin or anyone else who’s done some of the relevant reading would):

If our world really is a hologram*, what does it mean for the philosophy of mind that phenomenal experience* seems to occur at the holographic level rather than at the level of the lower-dimensional surface* (or brane, more technically, I guess)?  Does it bolster the case for consciousness-as-epiphenomenon (I think maybe, if the hologram can be created in multiple ways with varying underlying conditions, it nudges us toward an explicitly supervenient relationship)?

* I realize that all of these sound like the sort of nonsense that freshmen would ponder while stoned.  But that’s only half right: it’s not actually nonsense.

electronic door-opening excitement!

I just put up a post over at the Sunlight Labs blog detailing the electronic adventures involved in getting our office door on the network.  Still to come: a post detailing how my colleagues made the system accessible via iPhone, Droid, and plain-ol’ telephone system (with an assist from Twilio). I still need to solder in a bypass capacitor — the circuit’s not performing quite as securely as it did in the breadboard (although with a door made of glass it’s not like we’re securing Fort Knox).  But in general I’m really pleased with how this project turned out.

The Wolfman

(Spoilers ahead)

Emily and I saw it on Valentine’s Day, and although its lack of sexy vampires meant it was starting at a disadvantage vis-à-vis other werewolf movies, it was still fairly good.  In fact, for the first third of the movie it was just about everything you could want from a werewolf movie.  The moors were misty, the townspeople were appropriately panicky, and the beast — unspoken, unseen — was terrifying.

Anthony Hopkins is especially good — stacked up against a bunch of lesser and/or less professionally diligent actors, it becomes immediately apparent just how real a quality charisma is.  He does a great job as the obvious Colonel Kurtz figure, telegraphing the coming action to the audience in a dread-inducing way while keeping his character’s status close enough to ambiguity that the rest of the cast’s “oh I think maybe he’s just depressed” treatment of him remains plausible.  Hell, for the first act even Benicio del Toro seems okay, if you allow yourself to succumb to the tempting fantasy that his puffy, waxy aristocrat is being played that way to better contrast with his coming bestial descent. Actually, though, it turns out he’s just a bad actor!  Or doesn’t give a shit.  Either way.

Unfortunately, things degrade once the filmmakers let the werewolves out of Scotland.  The Victorian London on display is even less imaginative than the one from The League of Extraordinary Gentlemen.  And, as al3x points out, a proper werewolf demands a snout.  I get that they were intentionally sticking with the classic wolfman look, but it really needed more of an update from that “I would have combed my hair if I knew I was going to be photographed” look.

The movie does nothing with the tension-creating dynamics of the rules surrounding werewolf transformations.  The feints toward the destroying-someone-you-love theme that’s essential to werewolf stories seem half-hearted. The first act culminates in a fantastic scene at a gypsy camp, but everything else involving the gypsies turns out to be a red herring.  In fact, everything after that scene is either completely banal and straightforward, or a half-assed fakeout that does nothing but waste your time and attention.  And Hugo Weaving continues to annoy and disappoint me whenever he’s not playing a computer.

But man, what could’ve been! Really, the first third is very well done, and well worth sitting through when it arrives on HBO.  Hopefully by then you’ll be able to stream Dog Soldiers off Netflix, and you can turn off the later parts of Wolfman in favor of that.

thoughts on Buzz

  • It’s interesting to watch how people are using the service, and to try to deduce the norms that will soon emerge around it.  I just de-linked my Flickr account because I realized I didn’t mean to push a recently-uploaded photo on my followers.  I still have Twitter linked to it — though given the FB status/Twitter faux pas, I suspect I’ll remove that connection soon, too.
  • The automatic importation of contacts strikes me as a big mistake.  Not only because it’s a privacy problem, but because it short-circuits the normal lifecycle of social networks. It’s a profoundly elitist opinion, but I do think that it’s important to have an initial phase during which early-adopting users fill a new service with high-value content — amusing, uncensored, nonprofessional/noncommercial communication — creating an attractive networking target for the rest of the population, which then filters in. Instead, Google has opted to drop its users into the midpoint of its new network’s lifecycle.  I’m not sure this is a bad idea, exactly — I’d love to have a network with the immediacy of Twitter, but (slightly) looser space and media limitations (I’ll be curious to see whether Google has built the infrastructure behind Buzz to make more Twitter-like use cases possible). In theory, Buzz can satisfy that desire: it’s basically FriendFeed, but with a very high adoption rate among my contacts thanks to Google’s marketing advantage.  But because it skipped the (ahem) buzz-building phase, Buzz will never garner the excitement and accompanying celebrity and media evangelism that Twitter has.
  • Similarly, I doubt I’ll adopt an evangelical stance toward the service among my peers — I don’t do that very often, don’t have much of a talent for it, and don’t want to spend personal capital pushing a new social network (which, these days, is always a low-percentage play).  But I’ll keep watching, and I’ll keep using Buzz if other people do.

oh yeah

I was too busy sitting in a board meeting/getting horribly sick to mention it at the time, but Progressive Fix published another post from me, this time about why I’m not entirely comfortable with strong claims for net neutrality in the wireless space. Basically: in this case, critics’ fondness for spreading FUD about neutrality being a network-killer is at least a little more justified than it normally is.