
Comments


If anything, all this shows is the statistical shortcomings of the methodology, particularly when it comes to smaller numbers. We've known for a long time that the ACS undercounts cycling at least slightly, and most likely by quite a bit. If a random sample of commuters in 2013 came up with 4.5% cyclists, I think the real number was more like 6%, and now it's showing a decline to 3.9% (a 13% drop in one year). I still think the commute share of bicyclists is higher, maybe 6.5% or 7% citywide, but the sampling error in the survey is large enough that it can show big year-over-year swings even when none are actually occurring.
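To see how much noise sampling alone can produce, here's a minimal sketch; the 6% true share and the 1,000-person sample are made-up numbers for illustration, not ACS parameters:

```python
import random

# Sketch: even with a constant true bike-commute share, a modest
# random sample produces visible year-to-year swings. The 6% share
# and 1,000-person sample are assumptions, not ACS figures.
TRUE_SHARE = 0.06
SAMPLE_SIZE = 1000

random.seed(1)
for year in range(2009, 2015):
    cyclists = sum(random.random() < TRUE_SHARE for _ in range(SAMPLE_SIZE))
    print(f"{year}: observed share = {cyclists / SAMPLE_SIZE:.1%}")
```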

I agree with Will. Because the ACS sample sizes are small, the year-to-year fluctuations may or may not be valid. Over several years, however, the ACS data probably does reflect valid trends and locality differences.

I agree. There's a lot of noise in the data, and the plot at the Bike Portland site shows that.

And given the explosion in bike sharing, particularly in Alexandria, it's hard to believe there's been a regression.

These changes are all within the margin of error: the standard error for each year's estimate is about 0.3%, so the standard error for a year-to-year difference is about 0.4% (0.3% × √2), and the 95% confidence interval for the difference has a margin of error of roughly 1%.
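For anyone who wants to check that arithmetic, here it is spelled out; the 0.3% standard error is the figure cited above, and the √2 and 1.96 are textbook:

```python
import math

# SE of a difference of two independent estimates, and the
# corresponding 95% margin of error.
se_single = 0.003                      # SE of one year's estimate
se_diff = se_single * math.sqrt(2)     # SE of the year-to-year difference
moe_95 = 1.96 * se_diff                # 95% margin of error

print(f"SE of difference: {se_diff:.2%}")   # ~0.42%
print(f"95% MOE:          {moe_95:.2%}")    # ~0.83%, call it 1%
```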


I also like to look at people biking to jobs in DC (out of about 800,000 total commuters).

In 2014, they had 16,439 people biking to jobs in DC, down from 17,816 in 2013. But the margin of error is about 2,000 for each estimate, which makes the margin of error about 2,800 for the difference. And it was only 16,736 in 2012, so 2013 may simply have been a high draw.
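Running those numbers (the estimates and the ~2,000 margins of error are the ones above):

```python
import math

# Is the 2013-to-2014 drop in DC bike commuters bigger than the noise?
est_2013, est_2014 = 17_816, 16_439
moe_each = 2_000
moe_diff = moe_each * math.sqrt(2)     # ~2,828 for the difference

drop = est_2013 - est_2014             # 1,377
print(f"Drop: {drop:,}; MOE of the difference: {moe_diff:,.0f}")
print("Statistically distinguishable" if drop > moe_diff
      else "Within the margin of error")
```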

And even the margin of error is not the absolute maximum possible deviation; it's itself a probabilistic bound, so the true value falls outside it some fraction of the time.

It's easy to see how the error in this survey would creep in. I didn't check how many responses they got, but a common number I used to use was 252, which, if chosen at random, was supposed to give you representative information on a population of 10,000 or more (a 2.5% random sample). That's all well and good for the big trends, but when it comes to describing a small but growing pattern, even something around 5-10% of the total, the odds that you miss a chunk of the 13-25 people who represent that trend in your random sample get fairly high.
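A quick binomial calculation puts rough numbers on that, using the 252-person sample and the 13-cyclist subgroup from above; everything else is a plain binomial probability:

```python
from math import comb

# With a true share of ~5% (13 of 252), how often does a random
# sample of 252 land well below the truth?
n = 252
p = 13 / 252                 # ~5.2% true share

def prob_at_most(k_max):
    """P(X <= k_max) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_max + 1))

# Chance of seeing 9 or fewer cyclists, i.e. an observed share
# under 4% when the truth is above 5%:
print(f"P(9 or fewer of 252): {prob_at_most(9):.1%}")   # ~16%
```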

It's worse when you work in the methodology: did they call people at home on only 202 numbers? If so, they missed all the millennials who moved here with a cell phone and haven't changed their number since high school. Did they send mail to the registered tenant? If so, they likely missed all the younger people living in group houses who aren't on a lease. Did they use the voter database to find contact info? If so, they probably missed college and grad students and other young people who are still registered in a home district elsewhere.

When you add it all up, you know there's a reasonably sized undercount of anything that's a numerical minority to begin with, plus anything where the methodology doesn't quite capture randomness perfectly. It all magnifies the error.

All that said, I think the best estimates are that some of the inner neighborhoods of DC have up to a 12% bike mode share while the less accessible ones are around 3%, so in reality it ought to average out to around 6-8% citywide.

@Will: The Census Bureau knows how to conduct a survey and calculate the margin of error.

A sample of 300 would give a standard error of about 1%, right? Since the error shrinks with the square root of the sample size, wouldn't the reported 0.3% imply a sample of 3,000 or more?
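A back-of-envelope check, assuming a true share of roughly 4% (my assumption; the 0.3% standard error is the figure from the thread):

```python
# For a proportion p, the standard error is sqrt(p * (1 - p) / n),
# so the implied sample size is n = p * (1 - p) / se**2.
p = 0.04          # assumed share, roughly the ACS figure
se = 0.003        # reported standard error from the thread

n_implied = p * (1 - p) / se**2
print(f"Implied sample size: {n_implied:,.0f}")   # ~4,300
```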

