I get a fair number of yuppie housewares catalogs in the mail. I browse through them–I actually do like the style of much of their merchandise–but rarely do I actually buy anything. The catalogs want to sell you on the idea that simply buying a decorative plate will transform your whole dining room into something as stylish as the one put together for the catalog shoot, and I understand that it won’t.
Of all the yuppie housewares catalogs, NapaStyle is one of the yuppiest, to the point of almost being a laughable self-parody. But I’m writing here about something I bought from them (a NapaStyle exclusive, even) that’s turned out to be quite a satisfying purchase: the Stemware & Plastic Baggie Dryer. I hate to throw away Ziploc plastic bags after just one use; far better to wash and re-use them. This device is a ring of eight wooden rods that make excellent places to hang plastic baggies to dry.
Of course, one doesn’t need a drying rack to wash and re-use plastic baggies, but I wasn’t regularly doing so until I bought this drying rack. It works very well for its task. It’s also a very unassuming product: it doesn’t even have its own box; it was simply placed inside the shipping box. It was not enclosed in a plastic bag, and it was not packed with custom-fit styrofoam. It was not tied to a piece of cardboard with twist-ties. It required no assembly. It came with no manual, no marketing survey disguised as a warranty card, and no safety warnings. It has no website. You can’t get on its email list for exciting product updates. It’s made almost entirely of wood. It was made in Canada.
I wish more products were like that.
May 11, 2009
The cherry blossoms have come and gone now: two weeks of blooming and four days at the peak. A few pictures of my son enjoying the blossoms made their appearance on the Matthew Picture of the Day. The blooms are the most dramatic signal of the arrival of spring: there are a handful of other plants that bloom one way or another before the cherry trees do, but the cherry trees go from bare branches to large masses of fluffy pinkish-white rather dramatically.
Now the blossoms have blown away, and trees of every type are getting their leaves, and for a week or so the trees are all decorated in Spring Green. I had known about the Crayola color Spring Green since childhood, but it wasn’t until I was living in Ithaca that, after a characteristically long winter, I really understood what it meant. The very light, yellowish green of the nascent leaves on the trees across the street from my apartment was Spring Green; it was finally spring.
So now we begin the six or seven months in which the foliage and blooms of the plants around us make the city beautiful. This is capped by a month or so of fall foliage, after which nature’s beautification fades, slowly, and the seasonal decoration takes over.
Between Thanksgiving and New Year’s, holiday lights make the otherwise bleak city beautiful. Strings of white lights outlining houses and filling in shrubs, some overdone, some very subtle: they compensate for the dwindling sunlight and dormant vegetation. In Ithaca, we got our first snow around Thanksgiving; here in DC it comes much later, usually in January. Snowfall is only very briefly beautiful, while it’s still piled up on otherwise bare branches and the snow on the ground hasn’t yet been much disturbed. Within a few hours, it drops from branches and twigs, and snowplows and other traffic turn much of it into a dirty grey mush.
One thing I can’t understand is why the holiday lights that made the streets seem so inviting in December look so tacky in the middle of January. The weather is the same, and the hours of darkness are much the same, yet holiday lights, and the greens, golds, and reds of Christmas, look fantastically out of place. I suppose we’ve been trained by the retail industry to appreciate bold reds and whites, à la Valentine’s Day. Is there anyone who actually buys such seasonally-colored servingware from the yuppie housewares catalogs? And after Valentine’s Day, as the dreary bleakness of winter presses on, we imagine spring in pastel colors. And then spring happens, like it’s happening now, here in DC.
April 18, 2009
I’ve been back from the 2009 APS March Meeting for two weeks now and so the window of relevance for writing about it is rapidly closing. It was held in Pittsburgh this year, following the same format as last year. The meeting seems to be getting bigger each year: when I first attended, in 2003, there were about 5600 attendees; this year’s meeting drew 7000.
For a number of years now I’ve taken the online Epitome and Invited Speaker List and run them through Tcl scripts to make TeX files that give me speaker and session information in a format I think is more useful. This also allows me to look at overall meeting statistics: there were more sessions this year, 558, than in previous years; last year there were 517. What seems to be growing most sharply is the number of invited talks: there were 825 this year, compared to about 730 in each of the previous three years. Not surprisingly, this corresponded to an increase in the number of sessions with 5 invited talks: there were 95 such sessions this year, about 75 in each of the previous three years, and only 15 back in 2005.
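The Tcl scripts themselves aren’t reproduced here, but the tallies involved are simple once the program is parsed into records. Here’s the same bookkeeping sketched in Python; the session IDs and counts below are invented purely for illustration, not real meeting data:

```python
# Hypothetical session records of the kind one might extract from the
# online Epitome; each record carries its count of invited talks.
sessions = [
    {"id": "A1", "invited": 5},  # an all-invited session
    {"id": "A2", "invited": 0},  # a contributed session
    {"id": "B3", "invited": 5},
    {"id": "B4", "invited": 3},
]

total_sessions = len(sessions)
total_invited = sum(s["invited"] for s in sessions)
five_invited = sum(1 for s in sessions if s["invited"] == 5)

print(total_sessions, total_invited, five_invited)  # 4 13 2
```

With the real Epitome parsed into this shape, the year-over-year session and invited-talk counts quoted above fall out of exactly these one-liners.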
I only stayed through Wednesday of the meeting this year, taking an evening flight back home. I was rather irritated to find that the Cornell alumni reunion was held on Wednesday night, instead of Tuesday, like it always had been. I don’t know if this had even been published before I made my travel reservations, although I don’t know that I would have stayed an extra night just for that.
The disappearance of viewgraphs now appears complete. I was one of a handful of holdouts who was still using viewgraphs as late as 2006. Last year, I only saw one talk given using viewgraphs and this year I saw zero. There are still overhead projectors in the rooms, but they are kept on the floor beside the table upon which the computer projector sits. It’s amusing to read the note in the 2002 newsletter of the Division of Condensed Matter Physics:
More and more scientists want computer projection for their talks. This past year, computer projectors were available in invited session rooms only. Projectors are very expensive (~$400/ day/session) and would raise the registration fee at the conference significantly if placed in all rooms. Also, set-up time between talks makes staying on a 12 minute schedule for contributed talks very problematic. APS will continue to increase the availability of computer projection, but will not commit totally to them until price and technical interfacing problems become more tractable.
To be sure, there are problematic computers and I did see talks where roughly half of the time was taken up with computer fiddling.
On the topic of presentations, one thing that lots of speakers do, which really bugs me, is to show a graph of some raw data, usually a spectrum of something taken with a well-established experimental technique, but without giving any explanation. If I don’t use a technique myself, even if I know in principle how it works, I don’t know if it’s considered good or unexpected or interesting or disappointing if your graph has wiggles, or is flat, or has a bump in a particular place, or a big spike, or a big dip, or if it shifts a little as you twiddle some parameter, or shifts a lot. Context, my fellow physicists! Tell us what your measurement technique does, what shows up in your graph, what ordinary data would look like, and why your particular measurement is interesting.
I also ended my one-year physics-book-buying drought. I buy interesting physics books knowing that I’m not also buying the time it takes to work through them. I have one book purchase from two years ago that I’ve made a concerted effort to actually work through, but am perhaps only 20% done with it. And it’s not even a very challenging book. But I went ahead this year anyway, and took advantage of Cambridge University Press’s Wednesday afternoon buy-2-get-50%-off sale to pick up an otherwise ridiculously overpriced Elasticity with Mathematica and Geometric Algebra for Physicists, and also bought Group Theory: Applications to the Physics of Condensed Matter.
On to Portland
I’m looking forward to visiting Portland for next year’s March Meeting. I consider Portland one of my favorite cities, but in reality I’ve only spent several hours there at a time, waiting to change trains. But with a streetcar and Powell’s, who couldn’t love Portland? I had been sure that, a couple of years ago, I also saw Seattle on the list of upcoming March Meeting locations, but it seems to be gone now.
April 2, 2009
Without the quadrant designation, several intersections in Washington–“6th and C,” for example–are ambiguous. “6th and C” can refer to a place in NW, NE, SW, or SE DC. Because of this duplication of streets and intersections, the quadrant is usually–but not always–specified. I’ve been curious for some time to know exactly how many doubly-, triply-, and quadruply-redundant intersections there are in DC, and it’s another fun example combining Mathematica 7’s .shp file import with the GIS data that the DC government makes available.
How many are there? I calculate:
The 28 intersections that appear in all 4 quadrants are:
14th & D, 9th & G, 7th & I, 7th & E, 7th & G, 7th & D, 6th & C, 6th & G, 6th & D, 6th & I, 6th & E, 4th & M, 4th & G, 4th & E, 4th & D, 4th & I, 3rd & M, 3rd & C, 3rd & K, 3rd & D, 3rd & G, 3rd & E, 3rd & I, 2nd & E, 2nd & C, 1st & M, 1st & C, 1st & N
Plotted on a map:
Update: Here’s a larger PDF version.
Here’s how I made the map and did the calculations:
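(The Mathematica notebook code isn’t reproduced in this excerpt. For readers who just want the counting logic, here is a minimal sketch of that step in Python, over a hand-made toy list of intersections; apart from the examples mentioned in the post, the street names and quadrants are invented for illustration.)

```python
from collections import defaultdict

# Toy stand-in for the intersection points extracted from the DC
# street-centerline shapefile: (street, cross street, quadrant).
intersections = [
    ("6th", "C", "NW"), ("6th", "C", "NE"), ("6th", "C", "SW"), ("6th", "C", "SE"),
    ("14th", "D", "NW"), ("14th", "D", "SE"),
    ("K", "16th", "NW"),
]

quadrants = defaultdict(set)
for a, b, quad in intersections:
    key = tuple(sorted((a, b)))   # "6th & C" is the same as "C & 6th"
    quadrants[key].add(quad)

# Tally intersections appearing in exactly 2, 3, or 4 quadrants.
counts = {n: sum(1 for q in quadrants.values() if len(q) == n) for n in (2, 3, 4)}
print(counts)  # {2: 1, 3: 0, 4: 1}
```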
March 1, 2009
Tuesday, I joined the 1.8 million others on the Mall to watch the inauguration. I suppose I had underestimated just how important this event would be to the million or so people who got to the Mall before I did. I had a good experience, and in retrospect a series of nearly random decisions that we made all turned out to be the right ones.
A friend, in town and staying with us for the inauguration, and I left the house at 7 in the morning, somehow thinking that would be early enough. The station platform at the Brookland Metro was crowded when we arrived, and the inbound train that arrived was so full that only a handful more could board. We instead rode outbound one stop, thinking that either we’d catch a slightly less full train, or could get on one of the special presidential bus routes. Miraculously, there was just enough space for us on the next inbound train, which otherwise looked as crowded as the first.
We managed to meet a bunch of my friend’s friends downtown, at their hotel, at 8. We were north of the Mall. Since the entire parade route along Pennsylvania Avenue from the Capitol to the White House was walled off, we had to choose a route south. Among the many roads closed to motor traffic that day was 395, the freeway in a tunnel that goes under the Capitol reflecting pool. It was open as a pedestrian route, and in part because of the novelty of walking it, that’s where we went. Later, we heard that the handful of crossovers through the parade route became impossibly backed up. We entered at about 3rd and G NW and exited at E Street SW. We made our way along E, then up 7th to D, then along D to 14th, where we crossed into the Mall, finding spots at the base of the Washington Monument by about 9am.
And then we waited, in the cold. A half hour of relative stillness was enough to take away the warmth that physical movement generates, and I was reminded how much colder it is when you’re not moving. I knew this, but hadn’t really thought about how long we’d be out there. As was widely recommended, I had layered, and soon wished I had layered more. Perhaps what one should do is to subtract ten degrees Fahrenheit from the actual temperature for every hour you’ll be standing in the cold.
We had a decent view of a Jumbotron, and we could see the Capitol, but even with binoculars I couldn’t see any individuals. (Our view was partially blocked by trees.) What was significant was to be a part of the 1.8 million assembled, and of the 1.12 million Metro rides that day. It occurred to me, standing there, that one had to understand what was going on in order to get anything out of the experience. I was glad that my son was safely and warmly at home; to be able to tell him in later years that he was there would have been a useless gimmick.
So what did I understand the experience to be? On the one hand, under the former administration, I (in chronological order): earned my Ph.D., got a permanent research job, got married, bought a house, had a child. Am I better off than I was eight years ago? Unquestionably, yes. Not bad for living a few miles from the epicenter of what was quite possibly the worst administration in United States history. But of course I was out there in the cold, celebrating with everyone else.
The president has called upon Americans to rise up to our nation’s challenges. He is not the first to call for service to our country and to others, but unlike the mawkishly sanctimonious appeals of virtually every other public leader, his carry a sincerity and gravitas that are convincing. This is perhaps because of his extensive service as a community organizer. As hope and competence replace fear and cronyism as dominant motifs in the administration, I expect that the appeal of satire and detached irony will fade in popular culture: these are for those who are comfortable, detached, and powerless. Anger and self-absorbed drama are out; hard work and careful planning are in. Does this signal an ascendancy for the University of Chicago? The President taught there, of course. David Axelrod, chief campaign strategist, went there, as did Nate Silver, whose reality-based analysis beat all the other pundits this time around.
Four years ago, the progressive blogosphere wanted everyone to read Don’t Think of an Elephant, and now we have a progressive President who has intuitively understood framing and the use of language longer and better than anyone else on our side. Democrats should now be done with triangulation. And with Mark Penn and Terry McAuliffe, too.
On energy and environment and global warming and scores of other policies, Obama can already be said to be turning things around. Although Obama is not the first to mention science in his inaugural address, his reversal of the previous administration’s anti-scientific outlook is heartening. Perhaps our nation can be done with the celebration of anti-intellectualism. Perhaps patriotism will no longer be associated with flag-waving belligerence, but will be understood instead to be a celebration of those extraordinary characteristics of our nation that made our President’s story possible.
This was my hope, as I stood in the cold on the National Mall last Tuesday.
January 27, 2009
In the end, I went with the upgrade to Mathematica 7. Of all the new features, the one that really hooked me–which is comparatively minor, compared to all the other new features–is the ability to import SHP files. The importation is not terribly well documented nor is there much additional support, but it was pretty easy to do a few nifty things with the DC Street Centerline file.
As you may know, there is a street in DC for every state in the union. Pennsylvania Avenue is probably the most famous of these; the White House sits at 1600 Pennsylvania Ave NW. I used to live on Massachusetts Avenue. So my first idea was to make a street map of DC in which the state-named streets were colored red-ish or blue-ish depending on their vote in the recent election.
Here it is:
Read on to see how I made it:
January 17, 2009
Parts 1 and 2 of this series looked at the public side of the DC Alternatives Analysis process that took place between 2002 and 2005. Several newsletters were published, public meetings were held, and the study team met with civic groups and maintained a presence at various community events. The widely distributed documents tell only a small fraction of the story, however; to understand why the final report had such disappointing recommendations, one needs to delve into the more technical study documents, which weren’t widely distributed. The contrast between what was published publicly and the technical documents kept internally is instructive for anyone following an engineering study of similar scale.
Broadly speaking, these technical documents attempt to quantify the decision-making process so that every subsequent decision has a justification. The process obscures the study’s biases by shifting them into the methods of quantification, and ultimately confuses quantifiability with importance.
Setting the Stage
The formal program of the study was documented in the Project Work Plan, in January 2003. One of the first of the study documents was the short Quality Assurance Program, an eight-pager released in November 2003. It establishes the tedious tone in which all further study documents would be written, full of empty management-speak such as “All DMJM Harris staff performing tasks on the project will utilize the appropriate implementing procedure for the work being performed.”
Two reports were finished in August 2004: the Needs Assessment and the Evaluation Framework. These followed the extensive series of community meetings in late 2003. The Needs Assessment was the only technical document published on the (now-defunct) study website. It examined population, employment, and overall destination patterns across the city in relation to existing transit service. The Evaluation Framework brought together all the input–from DC agencies and from the community–about routes and goals and needs, and defined what sort of analysis was to be done. A structure of seven routes was proposed, two of which had alternative routings, though the stops along those routes were not yet defined. The project goals were laid out, and the measures and criteria used to evaluate choices in terms of those goals were defined, as was the general work plan for the several documents that followed.
Route and Mode evaluation:
Screen 1, released September 2004, evaluates seven potential transit modes (streetcars, “bus rapid transit,” light rail transit, diesel multiple units, automated guideway transit, monorail, and heavy rail), and ends up recommending only streetcars and “bus rapid transit” for further evaluation.
The Definition of Alternatives, released in November 2004, analyzed the routes given in the Evaluation Framework for the two chosen modes. Station locations were assigned and propulsion technologies were considered. For each route, a “service plan” was developed, including the headways between successive runs and calculations of route travel times. Although there are separate calculations for streetcars and for “bus rapid transit,” no details are given about the assumptions that went into the calculation of the travel times.
Screen 2, released March 2005, takes the service plans and route structure from the Definition of Alternatives and tries to evaluate how well each would fulfill the project goals by applying a set of “Measures of Effectiveness,” defined in the Evaluation Framework. Claiming that “the operational characteristics of BRT and Streetcar are similar at the level of detail” under study, it lumps both into a single “premium transit service option” to decide whether a particular corridor should have “premium transit,” or whether it should only receive some bus service enhancements. Corridors were ranked (high, medium, or low) on a few criteria for each of the four project goals, leading to a composite score. The screen then presented further analysis of corridor deficiencies and operational considerations, and concluded by recommending some routes for “premium transit” and relegating the rest to “local bus enhancements.”
Screen 3, released May 2005, takes the “premium transit” corridors of Screen 2 and applies further “Measures of Effectiveness” to determine whether each corridor should get streetcars or “bus rapid transit.” Each corridor is broken into segments, and the effectiveness criteria are applied to the segments individually. Where applicable–which isn’t as often as one might think–streetcars and “bus rapid transit” are evaluated separately. The scores from these evaluations are totaled to come up with proposals for streetcar routes, “bus rapid transit” routes, and “rapid bus” routes.
Further study documents, released May–September 2005, looked at the finances of the proposed system and put forward the timetable. All the study findings were summarized in a final report, published in October 2005. Future posts in this series will look in detail at some of these technical reports.
December 15, 2008
I keep a permanent page on this website listing the books that my son has, in order to minimize duplicate copies. I have just now updated this list to include books acquired over the past year.
He does also have a number of books in Korean, which are not presently listed.
December 4, 2008
To me, an intermediate and somewhat casual Mathematica user, the news that Mathematica 7 had been released was a surprise. Surprising because Mathematica usually goes much longer between major-digit releases; I would have anticipated this to be version 6.1. For fun, I’ve plotted the history of Mathematica versions:1
Mathematica 6 was a substantial upgrade: the graphics system was completely overhauled, the curated data that I’ve used as the basis for some posts here was added, and dynamic interactivity was added with the Manipulate and Dynamic functions.
I am not, of course, a major Mathematica user. In fact, although I’m a physicist, I haven’t made much use of Mathematica for my professional work. This is partly because I tend to deal with relatively small data sets, for which a GUI-based data analysis tool is usually easier to work with than the command-line Mathematica. And I’d consider myself an advanced user of Pro Fit, the data analysis tool that’s made all the graphs for all the work I’ve done since about 1998.
In fact, my Mathematica license is my own personal one. As a graduate student, I bought the Student version of Mathematica, which they allow you to upgrade to a full professional license for only a few hundred dollars, compared to the $2500 list price of a new professional license.
Wolfram really wants its users to buy Premier Service, which costs several hundred dollars per year and entitles you to all upgrades, major and minor. If you don’t buy Premier Service, then you need to pay for all upgrades, even the N.M.X to N.M.(X+1) minor bug-fixing upgrades. And without Premier Service, you’re not even supposed to install Mathematica on more than one computer. Draconian and greedy, if you ask me, but they can do that, because they’re Wolfram. For tech-heavy firms that make heavy use of Mathematica and get millions of dollars’ worth of value from whatever they compute in it, this makes sense. But it makes it very difficult to be a casual user.
And even though your existing copy can do everything it could the day you bought it, once the difference between your copy and the current release gets large enough, there is no longer an upgrade path. I think this is one of the motivations to release this as version 7 and not 6.1: I don’t recall the precise figure, but Wolfram generally offers an upgrade path only for jumps smaller than 1.5. If this is still the case,2 what this does is cut off anyone who hadn’t upgraded to version 6. Update: enough with the conspiracy theories! Wolfram clears up the upgrade policy in the comments.
In my case, with Version 6.0.1, I have a choice of paying $750, and getting a year of Premier Service, or paying $500 for just version 7.0.0 with no service. Out of my own pocket, ouch! But what makes it really enticing, for me, is that Mathematica now reads SHP files. These are the Geographic Information System data files, promulgated by ESRI, in which vector-valued geographic data is commonly exchanged. In particular, the DC Office of Planning makes an amazingly large collection of DC GIS data available in SHP format. The possibility for quantitative analysis of DC mapping data is very tantalizing.
Of course, Wolfram wouldn’t release a major number upgrade without hundreds of other new features. As of yet, there isn’t much substantial written about version 7. I did find some notes from a beta-tester and from a college math teacher. I’ll probably buy it, even though it would mean delaying other expensive toys that I want.
November 22, 2008
I needed to buy more gasoline for the car today, and I decided to see how long it took to fill the tank. I bought ten and a half gallons of gas, and it took 70 seconds to fill it up. Although filling up a gas tank is something that millions of Americans do every day, it’s really remarkable when you stop and think about the energy transfer going on.
Gasoline has, approximately, 113,000 BTUs per gallon.1 One BTU is 1055 joules. So I transferred 1.25 billion joules in those 70 seconds, which is a rate of 17.9 megawatts. When you consider that you spend less than two minutes pumping the same amount of energy you burn in four hours of driving, it’s not surprising that you end up with such a high power. What’s more interesting, I think, is to contemplate the rather fundamental limits this puts on plug-in electric cars.
Internal combustion engines, according to Wikipedia, are only about 20% efficient, which is to say, for every 100 BTUs of thermal energy consumed by the engine, you get 20 BTUs of mechanical energy out. This is, in large part, a consequence of fundamental thermodynamics. Although electric motors can be pretty close to perfectly efficient, a similar thermal-to-electric efficiency hit would be taken at the power plant.
Let’s consider, then, that we want a similar car to mine, but electric. Instead of 1.25 gigajoules, we need to have 250 megajoules. Battery charging can be pretty efficient, at 90% or so, which means we’d supply 280 megajoules. If we expect the filling-up time to be comparable to that of gasoline cars–call it 100 seconds for simplicity–then we’d need to supply 2.8 megawatts of power. At 240 Volts, which is the voltage we get in our homes, this would require 11700 amps; if you used 1000 Volts, it would take 2800 amps. Although equipment exists2 to handle these voltage and current levels, it is an understatement to say that it cannot be handled as casually as gasoline pumps are handled. Nor is it clear that any battery system would actually be able to accept this much power.
The power requirement for filling scales linearly with the vehicle’s range and power, and inversely with the filling time. If you’re satisfied with half the range of a regular vehicle, for example, you could use half the filling power. Let’s imagine that you’d be happy for the filling to take ten times as long as with gasoline, or 1000 seconds, just under 17 minutes. At this level, you’d need 280 kilowatts of power. If battery charging is 90% efficient, then 10% of the supplied power is dissipated as heat, which in this case would be 28 kilowatts.
For comparison, a typical energy consumption rate for a home furnace is 100,000 BTU per hour, about 28 BTU per second, or 29.3 kilowatts. Which means that the waste heat dissipated during charging for the example–of a 1000 second fill for a vehicle with similar range and power as a modest gasoline powered sedan, at 90% charging efficiency–is as much as the entire output of a home furnace.
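For anyone who wants to check the arithmetic, here is the whole chain of calculations above collected into a short script; the inputs are exactly the figures quoted in the post, and everything else is unit conversion. (The amperage comes out slightly below the rounded 11,700 A in the text because the script carries the unrounded 2.78 MW through.)

```python
BTU = 1055.0             # joules per BTU
gallons = 10.5           # gasoline purchased
btu_per_gallon = 113_000 # energy content of gasoline
fill_seconds = 70        # time to fill the tank

energy_J = gallons * btu_per_gallon * BTU      # ~1.25e9 J transferred
pump_power_MW = energy_J / fill_seconds / 1e6  # ~17.9 MW

# Electric equivalent: 20% engine efficiency means the battery only
# needs 250 MJ; 90% charging efficiency means ~280 MJ supplied.
charge_J = 0.20 * energy_J / 0.90
charge_power_MW = charge_J / 100 / 1e6         # ~2.8 MW for a 100 s fill
amps_240V = charge_J / 100 / 240               # ~11,600 A at 240 V

# A tenfold-slower fill (1000 s) needs ~280 kW, of which 10% is heat.
slow_kW = charge_J / 1000 / 1e3
waste_kW = 0.10 * slow_kW                      # ~28 kW of waste heat

# Furnace comparison: 100,000 BTU/hour expressed in kilowatts.
furnace_kW = 100_000 * BTU / 3600 / 1e3        # ~29.3 kW

print(round(pump_power_MW, 1), round(waste_kW), round(furnace_kW, 1))  # 17.9 28 29.3
```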
No wonder overnight charges are the standard.
November 13, 2008