# Playing with models: quantitative exploration of life.

## Size matters

Posted in Economics, Statistics by Alexander Lobkovsky Meitiv on October 8, 2018

My teenage son, who is currently into cars, pointed out to me that grill emblems have been steadily increasing in size, as can be readily seen in the side-by-side comparison of the 1992 and 2018 versions of the Honda Accord.

Being a data geek, I immediately wanted to know if there is a rational basis for the design choices car manufacturers seem to be making en masse. I needed data. Sales data was easy to obtain at Car Sales Base. The emblem size was harder to find and I ended up finding photos of cars taken head on and measuring the size of the grill emblem relative to the car width. To keep this undertaking manageable and fun, I focused on three Japanese mid-size cars that compete directly against each other: Toyota Camry, Honda Accord and Nissan Altima. The yearly unit sales and the relative grill logo sizes are plotted in the charts below.

Yearly US unit sales of three mid-size auto models that compete directly against each other. Note the late-2000’s housing bubble recession effect.

The grill emblem size relative to the width of the vehicle vs. the model year.

It's virtually impossible to tell from the plots of the sales and emblem sizes whether there is any association between the two. However, if we plot these data in a different way it becomes apparent. In the plot below, each point represents a pair of car models in a particular year. The x-coordinate of the point is the ratio of the unit sales for that year while the y-coordinate is the ratio of the emblem sizes for that year. If there was a model change that year, I took the average emblem size. The solid line has a slope of 0.356, and the P-value of this kind of correlation occurring at random is 0.002.

Each symbol represents a pair of car models in a particular year. The x-coordinate of each symbol is the ratio of the number of unit sales in that year and the y-coordinate is the ratio of the relative grill emblem sizes. The solid line is a linear fit.
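This pairwise-ratio fit can be sketched in a few lines of Python. The (sales ratio, emblem-size ratio) pairs below are made-up placeholders, not the measured data:

```python
# Least-squares slope of the emblem-size ratio vs. the unit-sales ratio.
# The pairs below are hypothetical placeholders, not the actual measurements.
pairs = [
    # (sales_A / sales_B, emblem_A / emblem_B) for one model pair in one year
    (0.8, 0.95), (1.1, 1.02), (1.3, 1.10), (0.9, 0.97), (1.5, 1.18),
]

def fit_slope(points):
    """Ordinary least-squares slope of y on x."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    return sxy / sxx

print(round(fit_slope(pairs), 3))  # → 0.331 for these placeholder pairs
```

With the real dataset one would also compute the P-value of the slope, e.g. with `scipy.stats.linregress`.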

We found a highly significant, moderate correlation between the emblem size and sales. But as any scientist repeats as a mantra: "correlation is not causation." For example, a change in the emblem size almost always coincides with a change in the model of the vehicle, so it is hard to disentangle the effect of the emblem size from the effect of the new model on sales. Still, at least some of the effect is plausibly attributable to the emblem itself, since the emblem size sometimes stayed relatively constant over several model changes, meaning the two factors are not perfectly confounded.

In the end, alas, one would have to gather a richer and larger dataset to say with any certainty whether the emblem size influences sales.

## Money plus endorsements equals votes

Posted in General by Alexander Lobkovsky Meitiv on August 20, 2018

Montgomery County in Maryland, where I live, is one of the largest and wealthiest counties in the US. It is, however, not immune to the sense of dissatisfaction with business as usual in government. Last year, a ballot initiative (which passed by a landslide), introduced and championed by conservatives, limited the number of consecutive terms a council member could serve. As a result, the nominations for three out of four at-large seats on the county council were up for grabs in this year's primary. The county council is a 9-person legislative body that makes important county-wide decisions, most notably allocating the $5.5 billion budget. That's a lot of power. It's therefore not a surprise that this year an unprecedented 33 candidates ran for the 4 at-large seats, including my spouse, Danielle Meitiv. The complete election results can be found on the Maryland Board of Elections website.

A cursory look at the results and the several high-profile endorsements, such as the Washington Post's and the MCEA's "Apple Ballot," suggests that these endorsements may have been a key factor determining the outcome of the elections. Being a data scientist, I wanted to go further than a cursory look and explore the results of a rigorous regression analysis. The dataset for the analysis was constructed by combining data from three different sources:

1. The Maryland Board of Elections for the final vote count
2. The Seventh State blog for the endorsements
3. The Maryland Campaign Reporting Information System for the financial information and the number of times a candidate ran before in Montgomery County

With the dataset assembled, it was straightforward to use R to construct and run linear regression models. A linear model assumes that all factors are independent and that their effects on the total number of votes obtained are additive.
I started with a model that included all factors and proceeded to remove insignificant (P-value > 0.05) factors one at a time, rerunning the model after every removal until all remaining factors were significant. The significant factors that survived the elimination process are summarized in the table below.

| Factor | Effect on votes |
| --- | --- |
| Baseline | 2,475 votes in the absence of other factors |
| Money raised | 47 votes per $1K raised |
| Washington Post endorsement | 14,200 votes |
| MCEA endorsement | 14,600 votes |
| SEIU500 endorsement | 7,340 votes |
| Gender (M/F) | 4,150 for F, 0 for M |
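The post ran the models in R; the loop below is a rough Python sketch of the same backward-elimination procedure on made-up data. The p-values use a normal approximation to the t distribution (an assumption, to keep the sketch dependency-free), and the factor names and numbers are illustrative, not the real dataset:

```python
import math

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols(X, y):
    """OLS fit; returns coefficients and approximate two-sided p-values."""
    n, k = len(X), len(X[0])
    XtX = [[sum(r[a] * r[b] for r in X) for b in range(k)] for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    beta = solve(XtX, Xty)
    rss = sum((y[i] - sum(X[i][j] * beta[j] for j in range(k))) ** 2
              for i in range(n))
    s2 = rss / (n - k)
    pvals = []
    for j in range(k):
        unit = [1.0 if a == j else 0.0 for a in range(k)]
        var_j = s2 * solve(XtX, unit)[j]  # s2 * [(X'X)^-1]_jj
        t = beta[j] / math.sqrt(var_j) if var_j > 0 else float("inf")
        # normal approximation to the t distribution (fine for large n)
        pvals.append(math.erfc(abs(t) / math.sqrt(2)))
    return beta, pvals

def backward_eliminate(names, X, y, alpha=0.05):
    """Drop the least significant factor until all p-values are <= alpha."""
    names, X = names[:], [row[:] for row in X]
    while True:
        beta, p = ols(X, y)
        worst = max(range(1, len(names)), key=lambda j: p[j], default=None)
        if worst is None or p[worst] <= alpha:
            return names, beta, p
        del names[worst]  # column 0 (the intercept) is never dropped
        for row in X:
            del row[worst]

# made-up data: the outcome depends on x1 but not on x2
n = 40
y = [2 + 3 * i + math.sin(i) for i in range(n)]
X = [[1.0, float(i), float((7 * i) % 5)] for i in range(n)]
kept, beta, pvals = backward_eliminate(["intercept", "x1", "x2"], X, y)
print(kept)
```

In R the same loop is a repeated `lm()` call, dropping the factor with the largest `summary()` p-value each pass.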

The main conclusion of the analysis is the importance of the Washington Post endorsement and the Apple Ballot. The model seems to suggest that the bulk of the votes of the top 7 vote-getters came from these endorsements. Another interesting conclusion is the male gender liability (or female gender advantage). This much-talked-about Trump-era effect is perhaps less pronounced in the MoCo election than in other races, but it is nevertheless measurable.

One has to be cautious interpreting the results of simple regression models. The fundamental assumption in computing the significance of factors is the statistical independence of the factor effects. This is, of course, not true. The Washington Post endorsement probably helped with fundraising, or vice versa: WaPo endorsed the clear fundraising front-runners. Another thorny issue is causality. Did the endorsements influence the outcome of the elections, or did the endorsing organizations simply read the tea leaves better and predict the winners?

If you would like to run your own models, here is the MoCoCouncilAtLargeVoteData.

## Inspections, fines and the honor system

Posted in Economics, Transportation by Alexander Lobkovsky Meitiv on September 25, 2015

Alexanderplatz U-Bahn station in Berlin

This summer we visited Berlin and enjoyed its excellent public transportation system.  Unlike the controlled access system prevalent in the US, Berlin’s (and Prague’s for that matter) system is an honor system.  You can buy a ticket at any time and when you board a train or a bus you stamp the ticket with the date and time the trip started.  Compliance is enforced by periodic inspections and the fine for riding without a validated ticket (Schwarzfahren) is a modest €60.  During our 8 days in Berlin we saw inspectors once and did not get inspected ourselves.

Given the ticket price, is the combination of the fine size and inspection frequency sensible?  That is, could BVG (the transit authority) save money by changing the fine amount and/or number of inspectors it has to hire?

To answer this question we must know how people decide whether to buy a ticket or risk being fined.  Many factors come into play here.  A purely rational and amoral consumer would buy the ticket if the product of the inspection probability and fine amount is larger than the ticket price.  This way, not buying tickets costs more in the long run.  However, there is a strong national pride and the concept of Beitragen (contributing) which keeps Schwarzfahren low (just a few percent according to this article in German).  Being caught without a valid ticket may also result not just in a fine but also in a court summons which presumably has a higher deterrent power.

Let’s explore the effect of the frequency of inspections (and the associated cost) and the fine amount on the total amount of money collected. I will define the single ride ticket cost to be 1 and the fine amount to be $F$.  To keep the formalism simple, I will neglect the reality that the fine for repeat offenders is higher.  This fact reduces Schwarzfahren but makes calculations more complicated. In the long run, the amount of the fine for repeat offenses is the important quantity.

I will assume for simplicity that the population of transit riders consists of two distinct pools. The conscientious riders always buy the ticket (population size $N$). The rest of the riders (population size $M$) are rational fare evaders and buy tickets only when the inspection probability per ride $p$ is greater than $1/F$. This assumption requires the ability to accurately measure the frequency of inspections, either from personal observations or from those of friends and family. The last ingredient in the recipe for computing the revenue of the system is the cost of inspections $c p (N + M),$ where $c$ is a new parameter which can be thought of as the cost per inspection.

Within our simplistic model, given the fine amount $F$, if the inspection probability is $p < 1/F$, the evaders do not buy tickets, and if $p > 1/F$ the potential evaders always buy tickets. Because the cost of inspections is linear in $p$, there are two local maxima of the revenue: either $p = 0$, in which case the revenue is simply $N$ (i.e. there are no inspectors and the evaders ride free), or $p = 1/F$, in which case the evaders buy tickets and the total revenue is $(N + M)(1 - c/F)$. Therefore, if the cost parameter $c$ is greater than the product of the fine amount and the fraction of fare evaders

$\displaystyle c > F \frac {M}{N + M},$

it does not make sense to inspect. Another way to look at this equation is that the fine amount has to be at least the cost per inspection divided by the fraction of evaders to justify inspections.
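The two-pool model fits in a few lines of Python (ticket price normalized to 1; the rider counts and fine below are made up for illustration):

```python
def revenue(p, N, M, F, c):
    """Net revenue per round of rides in the two-pool model.

    N conscientious riders always buy; M rational evaders buy only
    when the expected fine p*F exceeds the ticket price of 1.
    """
    evaders_buy = p >= 1 / F
    tickets = N + (M if evaders_buy else 0)
    fines = 0 if evaders_buy else p * F * M  # caught evaders pay F
    return tickets + fines - c * p * (N + M)  # minus inspection cost

# Hypothetical numbers: 900 buyers, 100 potential evaders, fine = 60x the fare
N, M, F = 900, 100, 60
for c in (3, 10):
    inspect = revenue(1 / F, N, M, F, c)  # p = 1/F: evaders buy tickets
    no_inspect = revenue(0, N, M, F, c)   # p = 0: evaders ride free
    # inspecting pays off exactly when c < F * M / (N + M)
    print(c, inspect > no_inspect, c < F * M / (N + M))
```

With these numbers the break-even cost per inspection is $F\,M/(N+M) = 6$ fares, so $c = 3$ justifies inspections and $c = 10$ does not.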

Of course the situation in real life is different. Most people are opportunistic evaders. Each person is characterized by a risk aversion parameter which balances the financial benefit of evasion with the emotional toll of being caught. Because of this fact, some level of inspection is always necessary. The level itself depends on the distribution of the risk aversion parameter in the population and the recovery rate of the fines as a function of the fine amount. But that’s for another post.

## Debunking the “Keep the AC on while away” myth

Posted in Economics by Alexander Lobkovsky Meitiv on September 22, 2015

We have all heard the supposedly common wisdom of leaving the AC (or heat) on in the house when away on a trip. This “wisdom” is perpetuated by the HVAC professionals themselves. Well, it’s wrong, at least under the simplifying assumptions below. You will always save energy and money by turning the heat/AC off when you go on a trip, no matter how short.

To quantitatively compute the difference in money/electricity between the two scenarios: 1) maintain the house temperature, and 2) turn the heat/AC off while away, we need to know the cost of either removing or adding an amount of heat $Q$ from/to the house. I will assume that the cost is simply proportional to $Q$. This is probably a good assumption for combustion heating systems but may not be very accurate for heat pumps, whose efficiency depends on the temperature difference between the inside and outside. Let me deal with this complication later.

Let me assume that the house has heat capacity $C$ so that the amount of heat $Q$ that enters or leaves the house to cause the temperature change $\Delta T$ is $Q = C \Delta T.$ Let me also assume that total heat conductance of the house’s walls is $K$. This heat conductance is the coefficient of proportionality between the outside-inside temperature difference and the total heat that flows into/out of the house per unit time: $dQ/dt = -K \Delta T.$

Therefore, under scenario 1 (keep heat/AC on), if the duration of the trip is $\tau$, the total heat that needs to be pumped into/out of the house is $Q_1 = K\tau |\Delta T|$, where $|\Delta T|$ is the mean difference between the indoor and outdoor temperatures.

When the heat/AC is off, the house temperature will approach the mean outdoor temperature exponentially with the characteristic time scale $C/K$. Let’s assume that the heat/AC system is so powerful that the heat is removed/added quickly when you arrive from the trip, so that the extra losses during the cooling/heating period are small compared to the total amount of heat to be removed/added. Then, the total amount of heat to be removed/added after the trip is $Q_2 = (1 - e^{-K\tau/C}) C |\Delta T|.$

Comparing the two amounts of heat, and noting that $x > 1 - e^{-x}$ for all $x > 0$ (here $x = K\tau/C$), we see that

$Q_1 > Q_2.$

In other words, the amount of heat that must be removed from or added to the house is always greater when you keep the system on during the trip. Therefore, unless there are counter-indications such as pets dying or pipes freezing, it is better to turn the heat/AC off when you travel, or at least set the thermostat closer to the outdoor temperature.
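A quick numerical check of the inequality, with made-up values for the house parameters:

```python
import math

C = 20.0   # house heat capacity (arbitrary units), hypothetical
K = 1.0    # wall heat conductance, hypothetical
dT = 10.0  # mean indoor-outdoor temperature difference

for tau in (0.1, 1.0, 10.0, 100.0):  # trip durations
    Q1 = K * tau * dT                           # heat pumped if system stays on
    Q2 = (1 - math.exp(-K * tau / C)) * C * dT  # heat pumped on return
    assert Q1 > Q2  # holds for every trip length because x > 1 - exp(-x)
    print(tau, round(Q1, 2), round(Q2, 2))
```

For short trips the two totals are nearly equal (the savings are small); for long trips $Q_2$ saturates at $C|\Delta T|$ while $Q_1$ keeps growing.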

## The optimal strategy for setting the price of event tickets

Posted in Economics by Alexander Lobkovsky Meitiv on September 10, 2015

I love tennis and always visit the ATP tournament in DC in August. This year I went with a friend who bought the ticket at the last moment online and discovered that it was deeply discounted (almost 50%). That fact, coupled with the observation that the stadium was less than half full, got me thinking: are the organizers employing the best ticket price strategy to maximize the receipts?

To determine the optimal ticket pricing strategy we need to know something about how people make buying decisions.  There are a number of different ways of describing this decision making process.  Here I will make several simplifying assumptions to make the problem analytically tractable.  These assumptions can be relaxed in practice albeit at a cost of having more complex equations.

In the simplest formulation, each person has an internal valuation $p$ of the ticket.  If the actual price $q$ is higher, this person does not buy the ticket. If the price is lower, this person buys the ticket at a rate (probability per unit time) proportional to the difference between the valuation and the price. The coefficient of proportionality is $1/\tau$, where $\tau$ is the characteristic time it takes to buy the ticket if the difference between the price and valuation is unity.  This $\tau$ can be called the customer response time.   I will make two simplifying assumptions: 1) $\tau$ does not change with time. This is not true in practice as people tend to make snappier buying decisions closer to the event; and 2) every person has the same $\tau$. Both assumptions can be relaxed by considering the response time that varies with time and is different for every individual.

The final ingredient in describing the population of people is the distribution of ticket valuations.  If the only information we have about this population is its size $L$ (total number of people potentially interested in the event) and the mean ticket valuation $p_0$, the number of people with valuations in a narrow interval between $p$ and $p+dp$ is $L/p_0 e^{-p/p_0} \, dp$, i.e. the distribution of valuations is exponential with a variance and mean equal to $p_0$.

Let me first consider the case of a fixed ticket price $q_0$. Also suppose that tickets go on sale some time $T$ before the event. The probability that a person with internal valuation $p > q_0$ buys the ticket during this time $T$ is

$\displaystyle{\mathcal{P}(p,T) = 1 - e^{-(p - q_0)T/\tau}}$.

Therefore, the expected number of tickets bought is

$\displaystyle N = \frac{L}{p_0} \int_{q_0}^\infty dp \, e^{-p/p_0} \, \mathcal{P}(p,T) = L\, e^{-q_0/p_0} \, \frac{p_0 T}{p_0 T + \tau}. \quad\quad\quad\quad (1)$

The gross receipts $N q_0$ (because everyone pays the same price $q_0$) are maximized when $q_0 = p_0$, i.e. when the price matches the mean valuation among the interested people. This is the best fixed-price strategy when the expected number of tickets sold at price $q_0 = p_0$ is smaller than the capacity of the event. When the event capacity is smaller than the expected number of tickets that would sell at price $q_0 = p_0$, the price can be raised to the value at which the expected number of tickets sold equals the event capacity.

The main conclusion is that when the ticket price is fixed and the event capacity is greater than $\displaystyle{\frac{L}{e}\frac{p_0 T}{p_0 T + \tau}}$, maximum gross receipts are achieved when the event is partially sold out.

The population of people potentially interested in the event is characterized by three parameters: 1) size $L$, 2) mean ticket valuation $p_0$, and 3) response time $\tau$. All of these parameters can be estimated by fitting the number of tickets sold as a function of time to the formula in Equation (1).
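Equation (1) and the receipts-maximizing price are easy to check numerically; the parameter values below are arbitrary:

```python
import math

L, p0, T, tau = 1000, 2.0, 5.0, 1.0  # hypothetical population parameters

def expected_sales(q0):
    # Equation (1): expected number of tickets sold at fixed price q0
    return L * math.exp(-q0 / p0) * p0 * T / (p0 * T + tau)

def receipts(q0):
    return q0 * expected_sales(q0)

# scan prices; the maximum of q0 * exp(-q0/p0) is at q0 = p0
best = max((q0 / 100 for q0 in range(1, 1001)), key=receipts)
print(best)  # → 2.0, i.e. the mean valuation p0
```

In practice one would fit `expected_sales` to the observed sales-vs-time curve to estimate $L$, $p_0$ and $\tau$ before optimizing the price.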

Having explained why partially sold out events could be the outcome of a rational pricing strategy we could call it a day and quit. However, pushing a little further turns out to yield something unexpected.

Let’s relax the assumption of a fixed ticket price. A declining ticket price which starts out quite high makes sense because people with a high ticket valuation will have likely bought a ticket early on and therefore more money can be made by expanding the pool of people who will buy a ticket at a lower price as the time of the event nears.

Let’s consider a linearly declining ticket price which starts high at $q_1$ when tickets first go on sale and ends up at $q_2 < q_1$ at the time of the event. I will not include the formulas here as they are pretty messy. However, as shown in the figures below there is something interesting, namely a discontinuous transition.

To be clear, the results below are for the case when the event capacity is large enough that the best strategy is not to sell out. It turns out that when the ticket sale duration $T$ is shorter than the characteristic customer response time $\tau$, the fixed ticket price is the best strategy. However, as $T$ becomes larger than $\tau$, the linearly declining price is better. To achieve the best result, the initial price has to be quite a bit higher and the final price quite a bit lower than the average ticket valuation. Therefore, a careful measurement of $\tau$ is required to pick the best pricing strategy (fixed price vs. linearly declining price), because mid-flight switching between strategies is not feasible.

In real life, each person’s ticket valuation is not a constant and can depend on a lot of factors such as how soon the event is, promotions and the number of tickets already sold. A full model which takes all these factors into account would have to be constructed if precise quantitative predictions are to be made.

Gross receipts for the two pricing strategies as a function of the time $T$ before the event that tickets go on sale. When $T$ becomes larger than the customer response time $\tau$, it is better to employ the declining price strategy. The mean ticket valuation for this graph is $p_0 = 2.$

When the ticket sale duration $T$ is larger than the customer response time $\tau,$ linearly declining price strategy is better than the fixed price strategy. The starting (high; red) and ending (low; green) prices are shown as a function of the sale duration.

## Money does buy Olympic medals … squared!

Posted in Statistics by Alexander Lobkovsky Meitiv on September 20, 2012

As I was watching the Jamaican sprinters sweep the 200m dash, I started wondering how such a relatively small country, not wealthy by any stretch of the imagination, could achieve such dominance.  Is there a correlation between the population of a country and its haul of medals?  Almost certainly.  Perhaps a more incendiary and more interesting question is “does money buy medals?”  Not in a literal sense, of course, but in a statistical sense.  Is there a correlation between the per-capita medal count and per-capita income?  There should be.  Money buys equipment, coaching and medical staff, transportation, etc.

As I embarked on this project, I expected to find a significant positive correlation.  But what I found was even more shocking.  Medal count per person grows as the square of per-capita income.  The graph below shows the medal count (obtained from http://www.london2012.com  and a Wikipedia article) divided by the population of each country (obtained from Wikipedia) vs. the purchasing parity GDP per capita (obtained from the CIA column of this Wikipedia article).  Only populous (>50,000,000) countries are included since statistical trends are more clear in large samples and the fluctuations that obscure these trends are smaller.  The straight line is the quadratic fit.

Why does the medal haul grow faster than linearly with the resources?  I have an explanation for this striking phenomenon, which assumes that each sport has an entrance threshold $s$ and that the distribution of these entrance thresholds is roughly uniform. If a country has a GDP per capita that is greater than the entrance threshold for a particular sport, it enters the competition. It follows that the number of competitors is inversely proportional to the entrance threshold of a sport. I further assume that all competitors are equally likely to get a medal once they enter the competition. Therefore the number of medals each competitor wins is inversely proportional to the number of competitors and consequently proportional to the entrance threshold of the sport. The final logical step is to notice that a country with per-capita income $s_0$ competes in all sports whose entrance thresholds are $\le s_0.$ Thus the total number of medals is proportional to

$\displaystyle \int_0^{s_0} s\, ds \sim s_0^2.$

Thus the quadratic dependence of the medal haul on the GDP per capita comes from the fact that richer countries enter more sports, and it is easier to win medals in more expensive sports since not as many countries can enter.
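The threshold argument can be checked with a small numerical sketch (all units arbitrary):

```python
# Sports have entrance thresholds s spread uniformly on (0, s_max];
# a country with per-capita income s0 enters every sport with s <= s0
# and wins medals in proportion to s (fewer rivals in pricier sports).
def medals(s0, n_sports=10000, s_max=10.0):
    total = 0.0
    for k in range(1, n_sports + 1):
        s = k * s_max / n_sports  # uniform grid of entrance thresholds
        if s <= s0:
            total += s            # medals per sport are proportional to s
    return total

# doubling the income should roughly quadruple the medal haul
r = medals(4.0) / medals(2.0)
print(round(r, 2))  # ≈ 4
```

This is just the discrete version of the integral above: the sum of $s$ up to $s_0$ scales as $s_0^2/2$.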

Money does buy Olympic medals, Squared! (the green line has slope 2)
Only countries with population greater than 50 million are included in this plot. Ethiopia is a particularly striking outlier, winning more than two orders of magnitude more medals than predicted by the green line.

## Immunization in the age of intercontinental travel

Posted in Game theory, Statistics by Alexander Lobkovsky Meitiv on January 18, 2011

Face masks curtail the spread of the virus during the flu pandemic.

If you have kids in school, you are familiar with how fervently the school administrators enforce the 100% immunization policy. The schools are complying with the local laws which grant exceptions grudgingly. Childhood immunization is a powerful tool against a variety of crippling viruses some of which are extinct (outside the controlled lab environment) as a result of widespread immunizations. Controversy over the possibly harmful side effects (mercury, other preservatives) notwithstanding, is the zeal for reaching the 100% immunization rate justified?

The efficacy of a vaccination program is quantified by the fraction of the non-immunized population that gets sick in an epidemic. This fraction can also be thought of as the probability that a particular individual will get sick in an epidemic.

A number of factors determine the probability of infection in an epidemic:

- How long is the sick person contagious?
- What is the probability of infection given contact with a sick person?
- What is the average rate of inter-personal contacts?
- How far does a person travel during the sickness?

The answers to these questions depend on the type of virus, and the properties of the population such as its density and the patterns of movement.

The situation seems too complex for predictive modeling. Could a simplified model offer meaningful insight? Yes, if we pick a narrow aspect of the problem to look at. How about this? You have probably heard the doomsday scenarios of a deadly virus spread around the world aboard airplanes. Is this kind of talk just fear-mongering, or a realistic prediction?

Let us construct a model to study whether the doomsday scenario is plausible. Let’s start with a 2D square lattice, or a board, whose sites (spaces) can be empty or occupied by “people” — let’s call them “entities.” The entities can be in one of three states: immune, vulnerable, and sick. The sick entities can infect the vulnerable but not the immune ones. We need to decide what to do with the sick entities. For example, some fraction of them can “die”–be removed from the board. The simplest thing is to just let them become immune after the disease has run its course. This is what is done in our model.

The entities can move around the board. The movement models the short-range everyday movement of the population: commute, shopping, going to and from school, etc. I will use a turn-based (like Conway’s Game of Life) set of movement rules that are often used in simulating fluid-vapor interfaces. The result is a collection of dense clusters of varying sizes that float in a sparsely inhabited sea. There is little exchange of entities between the clusters. Since the infection is acquired on contact, global epidemics are impeded by the limited inter-cluster movement. One could think of these semi-isolated clusters as communities, cities, or even continents depending on your perspective.

Below is the movie of the model simulation in which the sick entities (red) infect the vulnerable entities (blue) and after a while become immune (green). A fraction of the population is already immune at the onset of the epidemic. Observe how the disease propagates quickly across the clusters and makes infrequent jumps between the clusters. In this particular simulation, 37% of the vulnerable population got sick before the epidemic fizzled out.

You probably noticed that the immunization rate in the above example is rather low, 30% to be exact. Since most entities are vulnerable, the epidemic has no trouble spreading. When the immunization rate is more than doubled to 70%, most epidemics fizzle out early. As you can see in the PDF (probability density function) plot below of the total epidemic size (defined as the fraction of the vulnerable population that got sick), all epidemics involve fewer than 4% of the populace. There is simply not enough population movement for the disease to spread.

When the movement is local and the immunization rate is high, most epidemics fizzle out without affecting many people.

Time to include airplanes and examine the plausibility of the doomsday scenario!

In addition to the short range movement, let’s allow at each turn a certain small fraction of the population to move anywhere on the board. The second graph below is the PDF of the epidemic size for the same parameters as the one above, but with the additional 5% of the population executing large scale movement each turn. Notice the radical change in the scale of the x axis. When a small fraction of the population travels long distances each turn, most epidemics grow to encompass the majority of the population. The bimodal nature of the epidemic size distribution suggests that there is a threshold size. If the epidemic hits a cluster that happens to be larger than the threshold, the disease can escape and infect almost all other clusters.

When only 5% of the population execute large scale movement every turn, most epidemics grow to affect a significant fraction of the population.
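A stripped-down version of this lattice model fits in a few dozen lines of Python. The movement rule here is a plain random walk plus a `travel` fraction of long-range jumps, which is cruder than the fluid-interface dynamics described above, and all parameter values are illustrative:

```python
import random

def epidemic_size(n=30, density=0.5, immunized=0.3, travel=0.0,
                  sick_turns=3, seed=0):
    """Fraction of the initially vulnerable entities that get sick."""
    rng = random.Random(seed)
    grid = {}  # (x, y) -> [state, turns_of_sickness_left]
    for x in range(n):
        for y in range(n):
            if rng.random() < density:
                state = "immune" if rng.random() < immunized else "vulnerable"
                grid[(x, y)] = [state, 0]
    vulnerable0 = sum(1 for s, _ in grid.values() if s == "vulnerable")
    if vulnerable0 == 0:
        return 0.0
    patient_zero = rng.choice([p for p, (s, _) in grid.items()
                               if s == "vulnerable"])
    grid[patient_zero] = ["sick", sick_turns]
    infected = 1
    while any(s == "sick" for s, _ in grid.values()):
        # each sick entity infects its vulnerable 4-neighbors
        for (x, y), (s, _) in list(grid.items()):
            if s == "sick":
                for q in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                    if q in grid and grid[q][0] == "vulnerable":
                        grid[q] = ["sick", sick_turns]
                        infected += 1
        # the sick become immune after sick_turns turns
        for cell in grid.values():
            if cell[0] == "sick":
                cell[1] -= 1
                if cell[1] <= 0:
                    cell[0] = "immune"
        # movement: random-walk step, or a long-range jump with prob. travel
        moved = {}
        for (x, y), cell in grid.items():
            if rng.random() < travel:
                q = (rng.randrange(n), rng.randrange(n))  # fly anywhere
            else:
                dx, dy = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
                q = ((x + dx) % n, (y + dy) % n)
            if q in grid or q in moved:
                q = (x, y)  # destination occupied: stay put
            moved[q] = cell
        grid = moved
    return infected / vulnerable0

local = epidemic_size(travel=0.0, seed=1)
jet = epidemic_size(travel=0.05, seed=1)
print(local, jet)
```

Sweeping `immunized` and `travel` and histogramming `epidemic_size` over many seeds reproduces the kind of epidemic-size PDFs shown in the figures.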

Let us now quantitatively examine the effect of the large scale movements on the probability of significant epidemics. In the graph below I will plot the probability of occurrence of an epidemic that involves > 10% of the vulnerable populace as a function of the immunization rate for two different magnitudes of the large scale movement. Significant epidemics become rare as the immunization rate increases. However, perhaps not surprisingly, greater immunization rate is required to avoid epidemics for a larger magnitude of large scale population movement.

Greater large scale movement requires a higher immunization rate to avoid a significant epidemic

Predicting how epidemics spread in the real world is a tricky business. However, the general conclusion of this simple model, I think, will stand. While a 100% immunization rate is not strictly required to stem epidemics, as the extent of long-distance travel increases, we will need a higher immunization rate. It would be unwise to relax immunization requirements only to discover one day that not enough of the population is immunized.

The real issue, I think, is that the small fraction of people who refuse to be immunized are shielded from infection by those who took the (albeit small) risk of immunization. But that is a can of worms I don’t really want to open…

## The waiting game.

Posted in Game theory, Transportation by Alexander Lobkovsky Meitiv on December 20, 2010

When two buses can take you where you need to go should you let the slow bus pass and wait for the fast bus?

The Metro changed the bus schedule this morning with no prior notice whatsoever. The 9 am J1 bus I usually take did not come. As I found out later, it was dropped from the schedule altogether. For about 20 minutes I assumed (with diminishing conviction) that the J1 was merely late. While waiting for the J1, I let two J2’s go by. The J2 takes about 15 minutes longer to reach my destination. While I was waiting, I began to wonder what the best waiting strategy would be if there were two modes of transport with different travel times and different service frequencies. There is a math problem in there with a clear-cut answer.

It is simpler to consider the situation in which buses do not have a fixed schedule but arrive at a fixed rate per unit time $\mu$. Intervals between consecutive buses in this situation obey Poisson statistics, which means that no matter when I arrive at the stop the average waiting time before the bus arrives is $1/\mu$.
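The memorylessness claim is easy to verify with a quick simulation: generate a Poisson bus schedule, drop riders in at random moments, and measure their waits (the rate and sample sizes below are arbitrary):

```python
import bisect
import random

rng = random.Random(42)
mu = 0.5  # buses per minute
horizon = 200_000.0

# generate a Poisson bus schedule: exponential gaps with mean 1/mu
times, t = [], 0.0
while t < horizon:
    t += rng.expovariate(mu)
    times.append(t)

# riders show up at random moments; measure the wait to the next bus
waits = []
for _ in range(100_000):
    arrive = rng.uniform(0, horizon - 100)
    nxt = times[bisect.bisect(times, arrive)]  # first bus after arrival
    waits.append(nxt - arrive)

mean_wait = sum(waits) / len(waits)
print(round(mean_wait, 2))  # ≈ 1/mu = 2 minutes, whenever the rider arrives
```

The residual wait has the same mean as a full inter-bus gap, which is exactly the memoryless property of the exponential distribution.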

In what follows I will present a few results without much derivation. If you are interested in the nitty-gritty, contact me for details.

Suppose there are two buses A and B that arrive at a stop with rates $\mu_A$ and $\mu_B.$ The probability that A arrives before B is

$P(A\mathrm{\ before \ } B) = \displaystyle{\frac{\mu_A}{\mu_A + \mu_B}}.$

The contribution to the mean waiting time from the case in which A arrives first is

$t_A =\displaystyle{\frac{\mu_A}{(\mu_A + \mu_B)^2}}.$

Now if the travel times to destination on buses A and B are $\tau_A \geq \tau_B,$ we can compute the expected travel time if the traveler boards the first bus that comes to the stop. We will call it $T_0$ because the strategy is to let zero buses pass (even if they take longer).

$T_0 = \displaystyle{\frac{\mu_A \tau_A}{\mu_A + \mu_B} + \frac{\mu_A}{(\mu_A + \mu_B)^2} + \frac{\mu_B \tau_B}{\mu_A + \mu_B} + \frac{\mu_B}{(\mu_A + \mu_B)^2} = \frac{1 + \mu_A \tau_A + \mu_B \tau_B}{\mu_A + \mu_B}}.$

We can interpret this formula as follows. The total bus arrival rate is $\mu_A + \mu_B$ and therefore the mean waiting time for a bus (any bus) is $1/(\mu_A + \mu_B).$ Then with probability $\mu_A/(\mu_A + \mu_B)$ the A bus arrives first and the travel time is $\tau_A.$ Likewise, with probability $\mu_B/(\mu_A + \mu_B)$ the B bus arrives first and the travel time is $\tau_B.$

It is only marginally trickier to derive the mean trip duration (we will call it $T_1$) when we are willing to let one A bus pass by in the hope that the next bus will be the faster B bus. The answer is

$T_1 = \displaystyle{\frac{1}{\mu_A + \mu_B} + \frac{\mu_A T_0}{\mu_A + \mu_B} + \frac{\mu_B \tau_B}{\mu_A + \mu_B}}.$

The explanation of the second term in the above formula is that if A arrives first, we let it pass and we are back to the “let zero buses pass” strategy. The rest of the terms in the equation for $T_1$ are the same as before.

In general, for any $n \geq 1$ we have a recursion relation:

$T_{n+1} = \displaystyle{\frac{1}{\mu_A + \mu_B} + \frac{\mu_A T_n}{\mu_A + \mu_B} + \frac{\mu_B \tau_B}{\mu_A + \mu_B}}.$
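The recursion is straightforward to iterate numerically. A minimal sketch; the rates and ride times below are illustrative, not from the post:

```python
def expected_trip(n, mu_a, mu_b, tau_a, tau_b):
    """Expected trip time T_n when letting up to n slow (A) buses pass."""
    rate = mu_a + mu_b
    t = (1 + mu_a * tau_a + mu_b * tau_b) / rate  # start from T_0
    for _ in range(n):
        t = (1 + mu_a * t + mu_b * tau_b) / rate  # T_{k+1} in terms of T_k
    return t

# slow bus every 10 min, fast bus every 8 min; rides of 30 and 20 min
for n in range(4):
    print(n, round(expected_trip(n, 0.1, 0.125, 30.0, 20.0), 2))
```

Since the fast bus here arrives more often than once per $\tau_A - \tau_B = 10$ minutes, $T_n$ decreases with $n$, approaching the fixed point $\tau_B + 1/\mu_B = 28$ minutes.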

We can now start asking questions like: “Under what conditions does letting the slow bus pass make sense (i.e. results in a shorter expected trip)?” What about letting two buses pass? When does that strategy pay off?

When does $T_1 \leq T_0$? Comparing the formulas above, we arrive at a simple condition on the arrival rate of the fast bus, one that is independent of the arrival rate of the slow bus:

$\displaystyle{\mu_B \geq \frac{1}{\tau_A - \tau_B}} \quad \mathrm{(1)}.$

For example, if the slow bus takes 30 minutes and the fast bus takes 20 minutes to arrive at the destination, it makes sense to let the slow bus pass if the fast bus arrives more frequently than once in 10 minutes. No big surprise there, anybody with a modicum of common sense could tell you that.

What is surprising is that the condition (1) does not depend on the arrival rate of the slow bus. Did I make a mistake? It turns out that when $T_1 = T_0,$ the expected travel times for other strategies are exactly the same! I will leave the proof to my esteemed reader as homework :)

Therefore, the frequency of the slow bus does not matter: as long as the fast bus comes frequently enough (condition (1) is satisfied), it makes sense to wait for it no matter how many slow buses pass.
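The surprising independence claim is easy to check numerically: exactly at the threshold of condition (1), every strategy gives the same expected trip time, whatever the slow-bus rate. A quick sketch (the slow-bus rates tried below are arbitrary):

```python
# At mu_B = 1/(tau_A - tau_B) all strategies T_n should coincide,
# regardless of how often the slow bus runs.
tau_a, tau_b = 30.0, 20.0
mu_b = 1.0 / (tau_a - tau_b)  # exactly at the threshold of condition (1)

def t_n(n, mu_a, mu_b, tau_a, tau_b):
    """T_n via the recursion from the post."""
    rate = mu_a + mu_b
    t = (1 + mu_a * tau_a + mu_b * tau_b) / rate
    for _ in range(n):
        t = (1 + mu_a * t + mu_b * tau_b) / rate
    return t

for mu_a in (0.05, 0.2, 1.0):  # the slow-bus rate should not matter
    print([round(t_n(n, mu_a, mu_b, tau_a, tau_b), 6) for n in range(4)])
    # each row prints [30.0, 30.0, 30.0, 30.0]
```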

## Speed bested by agility

Posted in Dynamics, Game theory by Alexander Lobkovsky Meitiv on November 14, 2010

The rabbit relies on superior maneuverability to escape predators.

Next time you see a squirrel fleeing from a cat, notice its escape strategy. Instead of bolting straight for the safety of a tree, the squirrel follows a zig-zag trajectory with sharp, frequent turns punctuated by its thrashing tail. The pursuing cat, unable to execute the sharp turns, overshoots and loses ground as it adjusts its trajectory.

The flight behavior of a prey is instinctual and highly sophisticated–after all, predation is a major selective pressure in a Darwinian universe. The variation of escape strategies reflects the intrinsic abilities of the prey. Some prey, like the antelope, rely on superior speed and safety in numbers. Others, like the rabbit, or the squirrel, rely on superior maneuverability. In this post I will illustrate how superior maneuverability can be used to escape from a faster predator.

What is maneuverability? We will define it in a narrow sense as the ability to quickly change the course, or direction, of motion. The rate of change of velocity is known as acceleration. Colloquially, the word is used when the velocity changes along the direction of motion, but in general the acceleration vector can point in any direction. If it points opposite to the velocity, we say the moving object decelerates. Circular motion is sustained by a centripetal acceleration, which points toward the center of the circle, perpendicular to the velocity vector.

To illustrate how a slower animal can successfully evade a faster one, let us consider a simple model of the predator-prey pursuit. Suppose the only constraints on the motion are caps on speed and acceleration. The predator's maximum speed is greater than the prey's. Conversely, the prey can accelerate harder (in any direction), which means, among other things, that it can make sharper turns.

Here are the model pursuit strategies:

Predator:

• If traveling slower than the maximum speed, accelerate in the instantaneous direction of the prey at maximum acceleration.
• When traveling at maximum speed, project the acceleration vector onto the direction perpendicular to the instantaneous velocity. This ensures that the speed does not exceed the maximum.

Prey:

• If traveling slower than the maximum speed, accelerate away from the predator at maximum acceleration.
• If traveling at maximum speed and the predator is farther than a certain distance D away, stay the course.
• If traveling at maximum speed and the predator is within striking distance, execute a turn away from the predator at the tightest turning radius possible.

Even without doing the simulations of this model, we can foresee the qualitative features of the trajectories it yields. When the prey is farther than D from the predator, it will run along a straight trajectory, which means that the speedier predator will eventually catch up and draw within the distance of caution D. At that point the prey will commence a sharp turn away from the predator. The predator, being less agile, will not be able to turn as sharply; it will overshoot the prey, and the distance between them will grow and may exceed D. When that happens, the prey will stop turning and run along a straight line again. The cycle will repeat ad infinitum, with the predator never able to get closer to the prey than some finite fraction of D.
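These strategies can be sketched in a bare-bones simulation. Everything numeric here (the time step, starting positions, caution distance D) is my own illustrative choice, and the speed cap is implemented by rescaling the velocity, a simplification of the projection rule stated above:

```python
import math

def unit(x, y):
    """Normalize a vector; the zero vector maps to itself."""
    r = math.hypot(x, y) or 1.0
    return x / r, y / r

def step(pos, vel, direction, a_max, v_max, dt):
    """Advance one body by dt, accelerating along `direction`, capping speed."""
    vx = vel[0] + a_max * direction[0] * dt
    vy = vel[1] + a_max * direction[1] * dt
    speed = math.hypot(vx, vy)
    if speed > v_max:  # rescale: a shortcut for projecting the acceleration
        vx, vy = vx / speed * v_max, vy / speed * v_max
    return (pos[0] + vx * dt, pos[1] + vy * dt), (vx, vy)

def chase(steps=20_000, dt=0.01, D=2.0):
    """Run the pursuit; return the closest approach distance."""
    vp, va = 1.5, 1.0  # max speeds: predator 50% faster, as in the movie
    ap, aa = 1.0, 2.0  # max accelerations: prey twice as agile
    pred, prey = (0.0, 0.0), (5.0, 0.0)
    vpred, vprey = (0.0, 0.0), (0.0, 0.0)
    closest = float("inf")
    for _ in range(steps):
        dx, dy = prey[0] - pred[0], prey[1] - pred[1]
        closest = min(closest, math.hypot(dx, dy))
        # predator: accelerate straight toward the prey
        pred, vpred = step(pred, vpred, unit(dx, dy), ap, vp, dt)
        # prey: flee; inside the caution distance D, turn perpendicular
        if math.hypot(dx, dy) > D or math.hypot(*vprey) < va:
            prey_dir = unit(dx, dy)  # directly away from the predator
        else:
            tx, ty = unit(*vprey)
            px, py = -ty, tx  # perpendicular to the velocity
            if px * dx + py * dy < 0:  # pick the side away from the predator
                px, py = ty, -tx
            prey_dir = (px, py)
        prey, vprey = step(prey, vprey, prey_dir, aa, va, dt)
    return closest

print(chase())
```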

The movie below shows the trajectories of the prey (green) and predator (red) produced by the simple model when the predator is 50% faster, but the prey is able to achieve twice the acceleration.

Finally, let me point out that the strategies in the simple model are far from optimal. For example, if the predator could anticipate the direction of the prey's turn (which is possible in the above scenario, since the prey always turns away from the predator), it could potentially intercept the prey. The optimality of particular escape and pursuit strategies is usually hard to prove, and the methods of such proofs are still a subject of current research.

## What is the mpg of a bicycle?

Posted in Transportation by Alexander Lobkovsky Meitiv on November 10, 2010

The next green thing?

It seems a silly question at first. But dig a little deeper and it is easy to convince yourself that when you travel somewhere by bike, your body burns more calories than if you had sat in your office chair. The extra calories come from the extra food you had to eat. The land used to produce that food could instead have been used to grow corn and make ethanol. The amount of ethanol corresponding to the corn required to travel a mile by bike is undoubtedly small, but how small?

To compute my bike mpg we will need three numbers:

1. Extra calories burned per mile of bike travel at roughly 13 miles per hour (my average commuting speed). For relatively flat terrain and my weight this number is roughly 42 food calories per mile (obtained from about.com).
2. We need a food equivalent for the ethanol production. Let's say I get my extra calories from eating sweet corn. According to the same source, sweet corn has 857 food calories per kilogram. So I will need to eat 49 grams of sweet corn per mile traveled at 13 miles per hour on my bike.
3. Now we need to know how much ethanol can be made from 49 grams of sweet corn. The Department of Energy’s Biomass Program to the rescue. According to their website, a metric ton of dry corn can theoretically yield 124.4 gallons of ethanol. Since sweet corn is 77% water, this means that up to 0.0014 gallons of ethanol can be made from 49 grams of sweet corn.

Putting these numbers together we arrive at 0.0014 gallons of ethanol per mile or…drumroll please:

## 701 mpg

This number is not small, but neither is it very large! There exist experimental vehicles that seat four and achieve over 100 mpg. When fully loaded, their effective per-passenger mpg is over 400. If my calculations are correct, technology is about to bring motorized transport close to the efficiency of a person on a bike!
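For the record, here is the arithmetic in code. All inputs are the rounded figures quoted above; the result lands within a couple percent of the headline number, with the small gap due to rounding in the intermediate steps:

```python
# Back-of-the-envelope recomputation of the bike-mpg estimate.
CAL_PER_MILE = 42.0      # extra food calories burned per mile at ~13 mph
CAL_PER_KG_CORN = 857.0  # food calories per kilogram of sweet corn
WATER_FRACTION = 0.77    # sweet corn is ~77% water
GAL_PER_DRY_TON = 124.4  # theoretical ethanol yield per metric ton of dry corn

corn_kg = CAL_PER_MILE / CAL_PER_KG_CORN     # corn eaten per mile, in kg
dry_kg = corn_kg * (1 - WATER_FRACTION)      # dry mass of that corn
gallons = dry_kg / 1000.0 * GAL_PER_DRY_TON  # ethanol equivalent per mile
print(round(corn_kg * 1000), "g of corn per mile,", round(1 / gallons), "mpg")
```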