Archive for the ‘science’ Category

Wakemate

About three weeks ago I finally received my Wakemate. A part of the burgeoning quantified self movement and yet another example of a product made possible by the last half-decade’s debut of cheap silicon accelerometers, it’s exactly the kind of thing you’d expect me to buy.

Wakemate is built on three ideas borrowed from sleep research. First: we experience a recurring cycle of sleep states during a night’s rest. Pretty much everyone’s aware of this, if only because it was part of an episode of Star Trek. Over the course of a night you spend progressively less time in a deep sleep state, and more in light states where dreaming occurs.

Second: these sleep states are measurable using a technique called actigraphy. As this paper explains, during sleep the motion of your non-dominant wrist seems to correlate pretty well with more precise measures of sleep state. You can get a decent measurement of sleep state just by tracking what your left hand is up to (assuming you’re right-handed; a toy version of this kind of scoring appears in the sketch below).

Third: your level of grogginess upon waking varies depending on which part of your sleep cycle you’re in when your alarm goes off. This is known as sleep inertia, and the WM’s creators have a few paper excerpts about it here.

The Wakemate folks took these three ideas and combined them — in a way sure to elicit much (potentially justified) tongue-clucking from sleep researchers — into a product. Put on a wristband, load a program on your phone, and set a twenty-minute window during which you’d like to wake up. The device keeps watch during that time period for moments when you seem to be in a light sleep state, doing its best to find one and rouse you in a way that minimizes grogginess (if it doesn’t find one, it’ll wake you up at the end of the time window). The idea’s so clever that I barely care whether it works.
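Mechanically, I imagine the whole pipeline working something like the sketch below. To be clear, everything in it — the scoring weights, the threshold, the wristband interface — is invented for illustration; Wakemate’s actual algorithm isn’t public, as far as I know.

```python
# A toy version of the whole pipeline: score wrist-activity epochs as
# light or deep sleep, then fire the alarm at the first light-sleep
# moment inside the wake window. All weights, thresholds, and interfaces
# here are invented for illustration, not Wakemate's real algorithm.

def score_epoch(counts, i, threshold=40):
    """Score one epoch from a weighted window of activity counts:
    movement shortly before or after an epoch also counts against it."""
    weights = [0.1, 0.2, 0.4, 0.2, 0.1]          # centered on epoch i
    acc = sum(w * counts[i + j - 2]
              for j, w in enumerate(weights)
              if 0 <= i + j - 2 < len(counts))
    return 'light' if acc >= threshold else 'deep'

def run_alarm(counts, window_epochs=20):
    """Scan the wake window minute by minute; wake at the first
    light-sleep epoch, or at the window's end as a fail-safe."""
    for i in range(window_epochs):
        if score_epoch(counts, i) == 'light':
            return f'alarm at minute {i} (light sleep)'
    return f'alarm at minute {window_epochs} (fail-safe)'

# Quiet at first, then stirring toward the end of the window:
print(run_alarm([0, 2, 1, 0, 3, 1, 0, 2, 5, 12, 40, 90, 110, 80,
                 60, 30, 10, 5, 2, 0]))
```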

Snakebit

I first heard about all of this from my colleague Kevin back in February of last year. It sounded like an interesting idea, and for just $5 you could reserve your place in line for the device (it ultimately cost me $50; it’s now selling for $60). Wakemate is a Y Combinator startup, and its founders went through a semi-hilarious series of problems as they tried to ship their first product. Bad wristbands. Delayed electronics. Problems with Apple certification. The thing finally arrived, months late; the next day I got an email warning me that the included power adapter might burn my house down. And for the first week or so, the app only woke me up at the end of the 20-minute window — at the fail-safe point — seemingly because it wasn’t able to communicate with the wristband (I had to reboot the wristband multiple times to get the night’s data downloaded). With the exception of the charger (any USB adapter will do), all of these problems have been fixed. But it was a bumpy ride. Kevin still hasn’t received his.

Surprisingly Plausible

Here’s the source data from last Thursday’s sleep, and Wakemate’s classification of that data into sleep states.

This seems kind of reasonable! Check out the huge spike at the beginning of the accelerometer time series. That’s when I was still awake and reading. Over the course of the night I went through about four cycles, spending less time in deep sleep each iteration. You can see four clusters of movement data, too. This isn’t the cleanest night’s worth of data — I didn’t feel like clicking through all of them to find the tidiest — but as I’ve looked at these over the past few weeks, I haven’t yet seen any patterns that seemed implausible either in terms of the reported sleep cycle pattern or its correlation to the underlying movement data.

Does It Work?

At first I was a bit disappointed: the central gimmick of the WM didn’t seem to be working. If anything, I seemed to be groggier than usual when I woke up. But as I already mentioned, I eventually realized that the alarm was only going off at the end of the twenty-minute window. I emailed WM’s extremely responsive support line and was told that the issue had already been fixed in software and was just waiting on Apple certification. Happily enough, I was able to download the update by that evening. And although the days since have seen a suspicious number of wakings during the first minute of the alarm period, I’m actually surprised to report that it might be working. I’m still plenty groggy during the minute or two when I futz with the alarm (and report my level of alertness using the software slider). But I’ll be damned if I don’t seem to snap out of it sooner than usual.

On the other hand, this may not have anything to do with the timing of the alarm: it might just be that I’m getting more sleep. Which brings me to the best thing about Wakemate.

Data Porn

I was most excited for the alarm functionality, but the analytics package that WM provides has proven to be its most compelling feature. Your nightly sleep data is uploaded each morning and placed into an attractive interface. You can easily find information about time spent asleep, how long it took you to fall asleep, and how many times you woke up in the night. It’ll also show you how your recent performance in these areas compares to your career average, and to that of the entire population of WM users.

You can also tag each night’s sleep when you set the alarm — did you read before bed? go to the gym? drink alcohol? — and perform comparisons between tags.

Perhaps less helpfully, WM provides a “Sleep Score”. I can’t find any detailed information about how this is calculated — I suspect that this opacity is intentional, both to allow the formula to be tweaked and to keep users from trying to game it. And while it’s sort of amusing to have competitive sleeping leaderboards (how does Justin Sweetman sleep so virtuosically?), the scores seem to me to be basically bullshit. I tend to score highest when I’ve gone to bed late and with alcohol in my system; as you might guess, my scores don’t correlate very well with how rested I feel. You seem to be penalized for “low quality” sleep, even if it means more sleep — in other words, collapsing from exhaustion and sleeping like a corpse for three hours might earn you a higher sleep score than getting a normal night’s rest.

Since I’m on a bit of an Excel kick, here’s a plot of my sleep scores versus minutes asleep (WM recently added the ability to download your data as a CSV, which is nice of them).
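For the curious, reproducing that kind of plot from the export is only a few lines of Python — though the filename and column names below are my guesses, not necessarily what Wakemate’s CSV actually calls things.

```python
# Scatter sleep score against minutes asleep, with a least-squares
# trend line. 'wakemate_export.csv' and the column names are assumed.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('wakemate_export.csv')         # hypothetical filename
x, y = df['minutes_asleep'], df['sleep_score']  # assumed column names

slope, intercept = np.polyfit(x, y, 1)          # fit y = slope*x + intercept
plt.scatter(x, y)
plt.plot(x, slope * x + intercept)
plt.xlabel('Minutes asleep')
plt.ylabel('Sleep score')
plt.show()
```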

Admittedly, I don’t yet really have enough data for that trend line to be meaningful. But I have my suspicions.

Still, I’ve actually found the product to be worthwhile, not just as an interesting exercise in navel-gazing. For instance, it turns out there’s a reason my Sundays aren’t very productive:

I honestly had no idea I was getting so little rest on weekends.

In general, I’d say that it’s been surprising and useful to have the amount of time I spend asleep quantified. I’ve always needed a relatively large amount of rest in order to function. I have nothing but admiration (and jealousy) for those of you who get five hours a night, hop out of bed, write a thousand words and run a half marathon. But I just can’t do it. At the absolute depths of puberty/hibernation my body, when left to its own devices, was helping itself to twelve or thirteen hours of sleep a night. That’s thankfully not necessary any more, but I’m certainly not at my best when I get less than eight hours.

Wakemate has actually been useful for telling me when I’m not taking very good care of myself, and has provided a small but real incentive for paying attention to when I should call it a night. Admittedly, you can see that incentive diminishing in the above graph as the novelty of the WM wears off. Still, I’ve found the information useful.

Anyway, if it sounds appealing, you might want to give it a try — although until I’m more convinced of the alarm’s utility, I’d suggest considering the FitBit as well. I haven’t tried FB, but in addition to sleep analysis it quantifies your activity during the day, which might be interesting. It hasn’t got any anti-sleep-inertia alarm functionality, but perhaps that’ll be added later.

to clarify

Matt has graciously responded, pointing me toward a David Chalmers essay that I think I’ve skimmed in the past. Revisiting it, I realize I didn’t express the question I’m wondering about clearly enough. Chalmers (and Matt) seem to basically be saying that it’s not worth letting Cartesian hypotheticals keep you up at night, no matter how irrefutable they may be. I agree!

But what I find interesting about the holographic hypothesis is what Chalmers dismisses at the end of this passage:

The Computational Hypothesis says that physics as we know it is not the fundamental level of reality. Just as chemical processes underlie biological processes, and microphysical processes underlie chemical processes, something underlies microphysical processes. Underneath the level of quarks and electrons and photons is a further level: the level of bits. These bits are governed by a computational algorithm, which at a higher level produces the processes that we think of as fundamental particles, forces, and so on.

The Computational Hypothesis is not as widely believed as the Creation Hypothesis, but some people take it seriously. Most famously, Ed Fredkin has postulated that the universe is at bottom some sort of computer. More recently, Stephen Wolfram has taken up the idea in his book A New Kind of Science, suggesting that at the fundamental level, physical reality may be a sort of cellular automaton, with interacting bits governed by simple rules. And some physicists have looked into the possibility that the laws of physics might be formulated computationally, or could be seen as the consequence of certain computational principles.

One might worry that pure bits could not be the fundamental level of reality: a bit is just a 0 or a 1, and reality can’t really be zeroes and ones. Or perhaps a bit is just a “pure difference” between two basic states, and there can’t be a reality made up of pure differences. Rather, bits always have to be implemented by more basic states, such as voltages in a normal computer.

I don’t know whether this objection is right. I don’t think it’s completely out of the question that there could be a universe of “pure bits”. But this doesn’t matter for present purposes. We can suppose that the computational level is itself constituted by an even more fundamental level, at which the computational processes are implemented. It doesn’t matter for present purposes what that more fundamental level is. All that matters is that microphysical processes are constituted by computational processes, which are themselves constituted by more basic processes. From now on I will regard the Computational Hypothesis as saying this.

I don’t know whether the Computational Hypothesis is correct. But again, I don’t know that it is false. The hypothesis is coherent, if speculative, and I cannot conclusively rule it out.

The Computational Hypothesis is not a skeptical hypothesis. If it is true, there are still electrons and protons. On this picture, electrons and protons will be analogous to molecules: they are made up of something more basic, but they still exist. Similarly, if the Computational Hypothesis is true, there are still tables and chairs, and macroscopic reality still exists. It just turns out that their fundamental reality is a little different from what we thought.

The situation here is analogous to that with quantum mechanics or relativity. These may lead us to revise a few “metaphysical” beliefs about the external world: that the world is made of classical particles, or that there is absolute time. But most of our ordinary beliefs are left intact. Likewise, accepting the Computational Hypothesis may lead us to revise a few metaphysical beliefs: that electrons and protons are fundamental, for example. But most of our ordinary beliefs are unaffected.

Those “few metaphysical beliefs” are important, though! Contrary to what Chalmers implies, similar fundamental discoveries in other domains have, in fact, greatly informed our concept of how consciousness operates. The understanding that the brain is the seat of the mind; that neuronal firing is essential to its function; and that this function can be modulated by drugs or damage in ways that alter reported phenomenal experience and, we have strong reason to suspect, the mind itself — all of this may be philosophically irrelevant from Chalmers’ perspective, since none of it has seriously shaken our faith in personal agency or qualia or the integrity of the conceptual world we inhabit, or anything like that. Chalmers would probably not go this far, but I think personal experience has an irresistible, biologically determined immediacy, and the practical, personal psychological upshot of our discoveries about consciousness seems almost certain to be minimal. Being alive is going to keep seeming the way it currently does.

But the aforementioned discoveries did give us some good clues about the limits of consciousness (its time resolution, for instance), and avenues for thinking about how to create it artificially, and how morally concerned we should be about canned tuna’s dolphin-safe status. Certainly they blew dualism right out of the water (as far as most people are concerned).  It seems like the truth of the holographic hypothesis — and that we experience ourselves as part of the holographic projection and not of the underlying lower-dimensional brane — could also have some implications for how we think about, say, the possibility of panpsychism.

Or maybe not!  My aim is not to imply that the HH could be cause for a “nothing is real!”-style freakout, but I do think there might be more meat here than Matt’s first impression implies.

On the other hand, the most likely explanation is that I’m fundamentally misunderstanding (or New Scientist misconstruing) the HH.

beeronomics

Matt has taken my offhanded complaint about DC beer prices and placed it within the context of social justice, noting that DC’s high wages account for its high beer prices (our average drink prices are comparable to New York’s; our median annual wage, at $57.1k, is somewhat higher than New York’s, at $52.8k; both are significantly above the national median of $42.3k). His basic project is all to the good: thus is the charge of any thoughtful person concerned with the common weal and/or our ability to drink away worries about the state it’s in.

However! I object to his specific analysis of the situation for two reasons.

The first concerns the relevance of the wage figures Matt cites — I think the average DC beer-buyer is poorer than those numbers imply.  Matt’s reliance on pan-workforce statistics is understandable, but still insufficient for the task at hand. One needs to look at the bar-going slice of the population in order to characterize the market for happy hour beer. Sadly, the BLS does not consider this a focus of their work. However! Both New York and DC are on the high end of the marriage age spectrum, so let’s assume that bar-going rates are roughly similar for both populations. I contend that DC’s drinking class is likely to be impoverished relative to New York’s, for two reasons. First, DC’s young professionals are likely to have a substantially larger average debt burden than those of New York, given that we have twice the incidence of graduate degrees and, one imagines, student loans (UPDATE: proof!). Second: while I don’t have stats to back it up, the incidence of unpaid internships in this city must be abnormally high, even when compared to New York’s wage-depressed economy of starry-eyed small-town dreamers.

The second objection is based on evidence that the supply of alcohol in DC is artificially constrained, producing higher prices than would otherwise be found. I’m pretty sure that Matt’s aware of this effect, because I think I stole the idea from him. Namely: the terrible regulatory situation faced by DC bars. A search for retailers licensed to sell alcohol for on-premises consumption in New York, NY yields 14,718 records. An extremely generous summing of DC’s licensed alcohol sellers — I included everyone but grocery stores, liquor stores and caterers — shows 1,039 licensees. Normalizing this to population is challenging: liquor licenses aren’t administered at the MSA level, and it should be obvious that a larger proportion of DC’s happy hour attendees are from the suburbs than is the case for New York. But Wikipedia puts the workday population of DC at around a million, so let’s run with that number. If we do, we can see that DC has 1.039 milli-watering holes per capita (mWHPC), versus a robust 1.760 mWHPC for New York City (using per capita rates across the entire population — not just the bar-going segment — because of our assumption that the demographics (if not the economics) of bar attendance are similar in both cities). Even using the formal population of DC (591,833) leaves New York in the lead, with a DC score of 1.756. And this is to say nothing of New York’s later last call, which provides the city’s residents with 8.3% more drinking hours per bar (marginal though they may be).  Normalizing the mWHPC score to DC’s last call, New York emerges with a commanding 1.91 mWHPCw(2AM) — nearly twice as much as DC!*

The significance of this disparity is bolstered by looking at alcohol consumption stats.  These are actually pretty hard to track down on an MSA level, and obviously state figures won’t do when making comparisons to DC. This admittedly-dated survey is about the best I could find, and shows a rate of alcohol use over a 30-day window that’s five percentage points higher for DC than New York. All else being equal, one would assume that the per-capita quantity of bars would be higher in harder-drinking cities.  Yet in this case we see exactly the opposite.

This all suggests to me that wages aren’t the whole story — at least when that story is told in comparison to New York — and that regulatory forces play a significant role in distorting the DC market for alcohol by keeping supply artificially low relative to demand.

* As I went to sleep last night I realized that, unless something has changed since I last stayed out late and DC bars are now open for 24 of the day’s 26 hours, my math is wrong. The real percentage increase caused by a later last call should probably be substantially higher, even though a contrary effect should be introduced by applying something like a discount rate to the extra hours themselves.  So I can’t really propose a firm number, though using no discount and assuming drinking begins at 5pm would give you an adjustment weight of 1.22; it therefore seems safe to say that New York’s weighted score should be no more than 2.15.
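For anyone who wants to check the arithmetic, here it is in one place. The license counts and populations are the figures cited above; the New York population is my own round number, chosen to be consistent with the 1.760 figure.

```python
# mWHPC arithmetic from the post, including the footnote's corrected
# last-call weighting. The NY population is an assumption (~8.36M,
# consistent with the 1.760 figure above); the rest are cited numbers.

ny_licenses, dc_licenses = 14718, 1039
ny_pop, dc_daytime_pop, dc_formal_pop = 8_360_000, 1_000_000, 591_833

def mwhpc(licenses, population):
    """Milli-watering-holes per capita: licensees per 1,000 people."""
    return licenses / population * 1000

print(f'NY:           {mwhpc(ny_licenses, ny_pop):.3f}')          # ~1.760
print(f'DC (daytime): {mwhpc(dc_licenses, dc_daytime_pop):.3f}')  # 1.039
print(f'DC (formal):  {mwhpc(dc_licenses, dc_formal_pop):.3f}')   # ~1.756

# Footnote weighting: drinking from 5pm to last call gives DC nine
# hours (5pm-2am) and New York eleven (5pm-4am), a ~1.22 weight.
weight = 11 / 9
print(f'NY, weighted: {mwhpc(ny_licenses, ny_pop) * weight:.2f}') # ~2.15
```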

lithium

I need to resist the urge to drone on and on about this, but having failed to heed Matt’s encouragement to weigh in on the vital “worst bar in DC” debate, I feel obliged to pick this one up: why is CNAS’s post about lithium dumb?

First, some caveats: other than a brief tour through the world of materials science as an undergrad, several years’ worth of pestering ChemE PhD friends about related issues, and time spent watching resource-shock warnings fail to materialize, I have no credentials to offer on this score.  Defer to official-looking PDFs!

Onward. The first and most galling part is this:

Lithium is the lightest metal in nature and an excellent conductor of electricity, and these two properties make it especially useful for batteries.

This is just completely wrong. Lithium is useful for batteries because of its pronounced electrode potential — at about −3.04 volts versus the standard hydrogen electrode, the most negative of any metal — which makes for high cell voltages.  Its lightness is a welcome attribute too, of course.  But the person who wrote that sentence simply doesn’t know much about batteries.
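For a sense of scale, here are nominal per-cell voltages for a few common chemistries — these are standard textbook figures, and the point is how far lithium sits above everything else:

```python
# Nominal per-cell voltages of common battery chemistries (textbook
# figures). Lithium's very negative electrode potential is the main
# reason its cells run at two to three times everyone else's voltage.
nominal_volts = {
    'NiMH': 1.2,
    'alkaline (Zn/MnO2)': 1.5,
    'lead-acid': 2.0,
    'Li-ion': 3.7,
}

# Higher cell voltage means fewer cells in series for a given pack
# voltage -- and, with lithium's low mass thrown in, more watt-hours
# per kilogram.
for chem, volts in sorted(nominal_volts.items(), key=lambda kv: kv[1]):
    print(f'{chem}: {volts} V per cell')
```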

The other problem with the article is its wishy-washy buy-in to the “peak lithium” frame (the idea has been more forcefully expressed elsewhere).  Most of the world’s production of lithium ore comes from just a few places, and while CNAS acknowledges that they’re politically stable, Bolivia is on the docket as also having a lot of lithium — and what if it becomes the key source?!

Anyone who looks for them will see these resource-scare stories pop up on sites like Slashdot every once in a while.  It’s frustrating, and all too easy to fall for, but the dire scenarios they foretell never seem to come to pass.  I remember a professor with grave concerns about the unreliability of Kazakhstan’s chromium barons; yet a decade later, I have more stainless steel-clad consumer crap than ever.  And he was an expert!

Among the general public, the potential for resource fearmongering is much worse.  Most people don’t understand where metal comes from (admittedly, it is sort of mysterious).  As they attempt to puzzle out this question, the facts they have to work with tend to be A) we’re running out of oil! and B) prospecting for gold looks hard in westerns.

Neither of these is really applicable to most mining.  Oil’s primary use is as fuel — it’s energy, not a durable good, and consequently we run through a ton of it.  As a result, small price shifts can have really serious consequences across the economy.  If we had to buy a new laptop battery every time the old one ran out of juice, the prospect of increasing lithium prices would concern me much more than it does.

Gold, meanwhile, is a heavy and therefore scarce element, and its nonreactivity means it rarely concentrates into rich ore minerals, which helps make it difficult to mine.  It has some valuable properties, but it’s mostly valuable because it’s so rare.  Applying lessons about the difficulty of mining gold to the mining of other substances is a mistake.

Here’s the thing: ore is just material that has a little more of the substance you want in it than regular old rocks and dirt.  It’s a slightly better starting material, and therefore a more economical one to process.  But there are lots of grades of ore.  It sounds like Bolivia’s got some good stuff!  If that doesn’t work out, though, we can probably find some not-quite-as-good stuff (look! Wikipedia says a firm has figured out how to economically extract lithium from hectorite clay!). Or we can reactivate those not-currently-economical American mines that the CNAS post alludes to. Or we can recycle more lithium.  All it takes is money and a reckless disregard for the environment.  This is one of those things that the market really will solve.

So (still-theoretical) problems with our lithium supply could be bad news for the economic viability of electric cars, but we should resist the idea that we’re going to run out of any minerals.  I think we’re a long way from needing to panic about lithium supplies the way that we need to panic about oil supplies.

(Helium, though — boy, I don’t know. My friends Jeff and Marie have got me worried about it. Seriously: we need it to cool some extremely interesting magnets, it’s recovered commercially almost entirely as a byproduct of natural gas extraction — the helium is trapped underground along with the NG — and once it’s in the atmosphere it diffuses into space. Stop buying balloons, you monsters!)

coal is a much better bicyclist than you


Yglesias links to a new blog about “sustainable mobility”. As a bike triumphalist, I find this right up my alley. But the post at the top of the page is… unfortunate. Entitled “There Must Be a Catch, Right?”, it discusses a student’s proposal to attach power-generating systems to the fleets of bikesharing programs, collect riders’ spare energy, sell it back to the grid and pass the savings on to the consumer. It sounds great! Until you start doing math!

There’s some confusion about whether the power would come from regenerative braking or be siphoned off during pedaling. For a moment, let’s keep this in the realm of the plausible and stipulate that it’ll be from regenerative braking (anybody who’s used a generator-powered bike light knows that they make pedaling unpleasantly difficult). How much energy could be harvested from a cyclist coming to a complete stop? Well, let’s specify an implausibly heavy average cyclist (100 kg/220 lbs), an implausibly heavy bike (20 kg/44 lbs), and an implausibly fast speed (48 kph/30 mph). Plug those into the kinetic energy equation — KE = ½mv² — and you’ll get about 10,700 joules per stop. Now let’s specify an also-implausible 100 stops per mile — you’d be accelerating to 30 mph and stopping every 53 feet. How much energy would you generate for every mile traveled?

The depressing answer: about 0.3 kilowatt hours. Which, using these figures, works out to around three cents’ worth of electricity.
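Spelled out, the arithmetic looks like this. The inputs are the deliberately generous figures above; the electricity price is my own ballpark assumption.

```python
# Back-of-envelope: kinetic energy recovered per stop, per mile, and in
# cents. All inputs are the generous figures from the post; the price
# per kWh is an assumed ballpark retail rate.

rider_kg, bike_kg = 100, 20
speed_ms = 48 * 1000 / 3600            # 48 kph in meters per second
stops_per_mile = 100
price_cents_per_kwh = 10               # assumed retail electricity rate

joules_per_stop = 0.5 * (rider_kg + bike_kg) * speed_ms ** 2
kwh_per_mile = joules_per_stop * stops_per_mile / 3.6e6  # 1 kWh = 3.6 MJ

print(f'{joules_per_stop:,.0f} J per stop')                        # ~10,700
print(f'{kwh_per_mile:.2f} kWh per mile')                          # ~0.30
print(f'{kwh_per_mile * price_cents_per_kwh:.1f} cents per mile')  # ~3.0
```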

And of course not only are the above figures unrealistically optimistic, but the impracticality of having everyone drag along an extension cord introduces a new problem: you’ll need an onboard battery, and the battery system will cost you a lot of energy. The conversions from kinetic energy to electrical energy, from electrical energy to chemical energy, and from chemical energy back to electrical energy will all be far from perfectly efficient. I wouldn’t be surprised to see them eat up half of the energy generated.

Sadly, this doesn’t look like a compelling case for spending money to outfit bikes with regenerative braking systems. You’ll save much, much more energy by simply avoiding motorized transportation than you will by trying to squeeze more energy out of your bike.

But although this is bleak news for this particular instance of innumerate ecothusiasm, I still find the situation kind of inspirational: it’s a reminder that the forces we casually harness are incredibly vast when compared to the relatively meager capabilities of our biology. That’s not bad for a species that watches as much Law & Order as we do.

the worst part is that the jokes are already tired. well alright, not the WORST part.

CNN:

Researchers do not know how the virus is jumping relatively easily from person to person, or why it’s affecting what should be society’s healthiest demographic. Many of the victims who have died in Mexico have been young and otherwise healthy.

It’s too early to say if this is the mechanism (or even whether a genuine trend exists that makes young people more susceptible), but one possible explanation is a cytokine storm, in which the body’s immune system reacts so violently that it causes damage. There’s an interesting discussion of all this over at ScienceBlogs. Luckily for me, my immune system kinda sucks.

don’t you wish you knew more about transistors?

Of course you do!

fascinating

I had no idea.

wires are the best!

I’m now hopelessly late with this, so I’ll try (and fail) to make it brief. On Friday Ryan discussed D.C.’s ban on overhead “catenary” wires, which would be necessary for electric streetcars. Apparently you can’t use an electrified sunken rail in cities that have to salt their roads — there are corrosion issues. Unfortunately, changing the law to allow overhead wires would require congressional involvement. Ryan mentions a company that’s pushing a technology for transferring power wirelessly from the street to the streetcar, allowing the system to be sealed and immune to salt. It sounds like a pretty clever solution. But that doesn’t make it a good one.

Like a transformer, this technology works through induction, converting electricity to magnetism and back again. In a normal transformer you have a core of some sort — picture an iron ring — and you’ve got two wires, one for input and one for output. These get wrapped around the core like in this picture. Send alternating current into the input wire and its wrappings will generate a magnetic field, which will be conducted along the core, which will excite electrons in the other wire’s coils and generate an output current. If you change the ratio of windings on the input and output coils the voltage will change, too, which is a very useful thing to be able to do. As you might imagine, transforming energy from one form to another in this way isn’t perfectly efficient (although in large, well-designed units it can be very close to it). Some electricity is lost to heat, which is why those heavy old wall-wart adapters — heavy thanks to their iron cores — tend to be warm to the touch after they’ve been plugged in for a while.

Newer “switchmode” adapters use a different technique for changing voltage levels. This method doesn’t require an iron core, which is why they can be so much lighter and smaller. Strictly speaking, transformers don’t need the iron core, either. The problem is that they’re much less efficient without one — which brings us to inductive power transfer and the streetcars. In this case one coil is sitting in the street and one’s sitting in the streetcar. The core, such as it is, is made of air, which is terrible at guiding magnetic fields, and gets ever-more terrible the further apart the coils are — this is not a useful technique for moving power over anything but very short distances.
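To put a rough number on that intuition: for a transformer with an open secondary, the output voltage scales with the coupling coefficient k — the fraction of the input coil’s magnetic flux that actually links the output coil. The k values below are illustrative guesses, not measurements of any real system.

```python
# Open-circuit secondary voltage of a transformer with coupling
# coefficient k: V2 = k * (N2/N1) * V1. Iron cores give k near 1;
# separated air-core coils give much less. The k values here are
# illustrative guesses, not measurements.

def secondary_voltage(v1, turns_ratio, k):
    return k * turns_ratio * v1

v1, turns_ratio = 120.0, 1.0
for name, k in [('iron core, tightly wound', 0.98),
                ('air core, coils touching', 0.5),
                ('air core, street-to-streetcar gap', 0.1)]:
    print(f'{name}: k = {k:.2f} -> '
          f'{secondary_voltage(v1, turns_ratio, k):.0f} V')

# And the electricity-bill arithmetic that comes up below: if direct
# conduction is ~95% efficient and an inductive link ~80%, the same
# service draws
print(f'{0.95 / 0.80 - 1:.0%} more energy')   # ~19% more
```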

The point of all this is that you’re going to be wasting energy if you try to move it around wirelessly. Worse, it’s going to be more expensive to build the system to do all this than it would be to just make the connection with wires. It’s not as if inductive charging hasn’t been tried in the marketplace: some early electric cars used inductive paddles to charge, and various efforts are intermittently made to provide magical laptop-charging desks. But aside from electric toothbrushes — which, as the previous link notes, can afford to waste some energy if it lets them stay watertight — there just aren’t that many applications where choosing a more expensive, more wasteful way of transmitting energy makes sense.

And I have a feeling that the same will prove to be true of streetcars. Wikipedia cites an 80% efficiency number from an experimental bus system developed in 1990. That was a while ago, but it’s hard for me to imagine that the situation has gotten that much better — or that real-world applications could match the performance of a system on a closed test track. It’s going to cost more money than catenary wires, and you won’t be able to get your coils very close together on hilly streets, so your electricity bill is going to be 10% or 20% more than it would be if you used direct conduction. It would be a shame to let an accident of bureaucratic history make this engineering choice.

charts & graphs

son1 has written a post that continues the discussion I began around colors and data visualization, and I’m jealous of it for two reasons. First, I can’t believe I didn’t think of and claim that post title for myself, because it’s perfect.

Second, he does a much better job of getting to the heart of what I was trying to express: that a surprisingly large number of data visualizations are both correct and question-begging. The choices made by the creator will inevitably influence which conclusions are drawn. That isn’t to malign the idea of graphs and charts and maps — at their best they are arguments that contain all of their component data, and whose accuracy can be easily checked. But they’re still arguments.

Perhaps all this stuff has been said before and better by Tufte, but those books are expensive, dammit.