Thursday, August 30, 2018

Price Inflation



The Happening

Riding high on top of the world, it happened.
Suddenly it just happened,
I saw my dreams torn apart

Something happened to prices during my lifetime. Rather than just being a cranky old man complaining about it, can examining the data suggest what happened?

Can you imagine a 5 cent candy bar? If you were born in 1951 like I was, you probably not only can imagine it, you can remember it. Before I entered college in 1969, prices were fairly stable. My first new car cost only $3,000. Something has happened to prices since that time. Looking at the data might help understand when, and why, something happened to prices.

The U.S. Bureau of Labor Statistics reports the Consumer Price Index (CPI), which is often used to track inflation, from 1913 to the present. It is often indexed to a specific year. When the annual CPIs (indexed to 100.0 in 1984) are plotted, they take on a distinctive shape as shown below.





Doing a non-linear regression on that data produces an equation whose values have a correlation of 0.9926 with the reported CPIs. The non-linear equation is essentially two straight lines with a transition between them somewhere between 1969 and 1975. Looking for a historical event during that period that might have affected the CPI, and that has remained in place since, suggests the Nixon Shock. In 1971, President Nixon ordered that the US Dollar, which was then the international reserve currency, no longer be convertible into gold. The dollar had not been convertible into gold for US citizens since the 1930s, and this action seemed primarily to affect foreign governments, but it remains in effect today.







If this was indeed the cause of the transition, then there are probably two transitions: one at the Bretton Woods Conference in 1944, when the US dollar (convertible into gold) was first made the international reserve currency, and one in 1971, when the US dollar remained the international reserve currency, as it does today, but was no longer convertible into gold.





Fitting straight lines to the CPIs in each time period produces:

• A period before Bretton Woods, with virtually no CPI inflation
(an increase of 0.072 1984-basis CPI points per year),

• A period between Bretton Woods and the Nixon Shock, where CPI inflation was modest
(an increase of 0.645 1984-basis CPI points per year), and

• A period after the Nixon Shock, where CPI inflation was large
(an increase of 4.597 1984-basis CPI points per year).

Using these three straight lines, you can compute values that have a correlation of 0.99931 with the reported CPIs.
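The three-segment fit described above can be sketched as a segment-by-segment least-squares regression. The series below is an illustrative placeholder built from the slopes reported above, not the actual BLS data; only the breakpoints (1944 and 1971) and the method come from the text.

```python
import numpy as np

def piecewise_fit(years, cpi, breaks=(1944, 1971)):
    """Fit a separate straight line to each period and return
    the fitted values plus the slope of each segment."""
    years = np.asarray(years, dtype=float)
    cpi = np.asarray(cpi, dtype=float)
    edges = [years.min() - 1, *breaks, years.max() + 1]
    fitted = np.empty_like(cpi)
    slopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (years > lo) & (years <= hi)
        slope, intercept = np.polyfit(years[mask], cpi[mask], 1)
        fitted[mask] = slope * years[mask] + intercept
        slopes.append(slope)
    return fitted, slopes

# Illustrative series: flat, then modest, then steep growth, plus noise.
rng = np.random.default_rng(0)
years = np.arange(1913, 2018)
true = np.where(years <= 1944, 17.0,
       np.where(years <= 1971, 17.0 + 0.645 * (years - 1944),
                34.4 + 4.597 * (years - 1971)))
cpi = true + rng.normal(0, 0.5, size=years.size)

fitted, slopes = piecewise_fit(years, cpi)
corr = np.corrcoef(fitted, cpi)[0, 1]
print(slopes)  # roughly [0.0, 0.645, 4.597]
print(corr)    # close to 1, like the 0.99931 reported for the real data
```

With real CPI data in place of the synthetic series, the same few lines recover the three slopes and the overall correlation quoted in the text.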

If you look only at year-to-year inflation, which dropped from 11.0% in 1974 to 2.1% in 2017, you miss this underlying long-term impact. The impact on inflation when a domestic currency is also used as the international reserve currency is known as the Triffin dilemma (https://en.wikipedia.org/wiki/Triffin_dilemma). An analysis of the reported CPIs suggests that the dollar being the international reserve currency, especially once the dollar was no longer convertible into gold, has had a measurable effect, not just on the international economy, but on our daily lives.


Tuesday, August 21, 2018

The Difference between Means and Medians



Wonderful World

Don't know much about algebra,
Don't know what a slide rule is for

If you don’t understand math, then you may get talked into supporting some decisions that are not in your own best interest.

Teen Talk Barbie was right: “Math class is tough.” But that doesn’t mean that you shouldn’t try to understand math. If you don’t understand some basic concepts of math, then you can get some unexpected results. One of those is that increasing the average, also known as the mean, does not necessarily make things better for the typical person. The mean (average) household income is total income divided by total households. The median household income, the income of the middle, is the income at which 50% of the households have higher incomes and 50% of the households have lower incomes. In a normal distribution, the median and the mean (as well as another statistic called the mode) are all the same. When the gap between them grows, should that be called abnormal? To illustrate this, consider the town of Duckburg, home of Uncle Scrooge McDuck.

In the town of Duckburg, 1,000 households have an income of $50,000 per year while Scrooge McDuck has an income of $5 million per year. The mean (average) income of all households is approximately $54,945 per year, while the median (50th percentile) income of the households is $50,000. The town is going to receive new income of $10 million per year, but it has to decide how to divide this new income among the households in the town.

Scrooge says that his income is over 9,100% of the mean income, so he should get most of that new income. However, he says that he wants to be generous, and suggests that he should get only 50% of the new income, with the other $5 million shared among the rest of the households. This increases Scrooge McDuck’s income from $5 million to $10 million per year, while the other duck households increase from $50,000 to $55,000 per year. However, while the mean income increased by almost $10,000, to $64,935, the median income increased by only $5,000, to $55,000. The problem is that while Scrooge’s income was over 9,100% of the mean income, it was only about 9.1% of the total income of all households in Duckburg. The new $10 million is an 18.2% increase on the town’s $55 million total income. To keep the income distribution the same, he and every other household should each have received that same 18.2% increase, not a 100% increase for Scrooge and a 10% increase for all other households. If every household’s income, including Scrooge’s, had increased by 18.2%, the shape of the income distribution would not have changed.
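The Duckburg arithmetic is easy to check with the standard library; the incomes are exactly the ones given above.

```python
from statistics import mean, median

# Duckburg before the new income: 1,000 ordinary households plus Scrooge.
incomes = [50_000] * 1_000 + [5_000_000]
print(round(mean(incomes)))  # 54945 -- the "average" income
print(median(incomes))       # 50000 -- the "typical" income

# Scrooge's proposal: he takes $5M, the other 1,000 households share $5M.
after = [55_000] * 1_000 + [10_000_000]
print(round(mean(after)))    # 64935 -- mean jumps by almost $10,000
print(median(after))         # 55000 -- median rises by only $5,000
```

The mean moves by almost $10,000 while the median moves by only $5,000, which is the whole point of the example.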

Might giving most of the new income to Scrooge McDuck have been a good idea? Would he be more likely than most households to use that income to grow the economy, as supply-side economics holds, accepting a less equitable but possibly more productive outcome? Maybe, but fans of Uncle Scrooge know the most likely outcome: Scrooge would simply add to the pile of money in his vault, in which he will swim.



Saturday, June 3, 2017

Truck Classification


The Blind Men and the Elephant


And so these men of Indostan, Disputed loud and long, Each in his own opinion, Exceeding stiff and strong, Though each was partly in the right, And all were in the wrong!

There is no single classification system that can be developed for trucks.

The poem the Blind Men and the Elephant is a humorous warning that it is not possible to establish an absolute truth based on limited observations.  The blind men in the poem base their understanding of the nature of an elephant on the things that they can actually observe, act like theirs is the only important observation, and then use that observation to get the nature of the elephant completely wrong.  
A truck is just as complex as an elephant.  If a vehicle is to be classified as a truck, and that truck is to be further classified into various types of trucks, the observations become important.  
Do you classify a truck based:
  • On the weight of the truck?  And if weight, is it the weight at the time of the observation? Or the maximum weight that can be legally transported?  Or the average weight per axle?
  • On the number of axles and tires of the truck?
  • On the body type of the truck?
  • On the length of the truck?
  • On the commercial markings on the side of the truck and/or its trailer?
  • On the purpose of that truck’s trip?
  • On the power of the engine in the truck?
  • On the type of fuel powering the truck?
  • On the contents of the cargo area of the truck? If the cargo contents, how is the cargo to be classified?  
These are not just idle questions.  Each of these observations has been made, and the use of them leads to different and potentially incompatible classification systems. Weigh-in-Motion (WIM) stations observe the weight of a truck at the moment it passes through the station. Departments/Registries of Motor Vehicles (DMV/RMV) report the Gross Vehicle Weight (GVW), the maximum weight of the vehicle, cargo and passengers as specified by the manufacturer of the truck. Pavement engineers are concerned with the weight per axle of various types of trucks.  FHWA, in its Traffic Monitoring Guide (TMG), outlines a truck classification system based on the number of axles, tires and the general body type.  Some state DOTs classify trucks based on the length of the truck and its trailers.  Video or visual observations often classify a truck based on the markings on the side of the truck.  Commercial vehicle surveys might be the only observations of trip purpose, and then only for the sample of trucks that are surveyed.
This can lead to classification systems that are incompatible.  Both of the trucks shown below have a body type of beverage truck.  But the truck on the left would be classified by the TMG as a Class 5 (single unit with 2 axles and 6 tires), while the truck on the right would be a Class 8 (combination unit, one trailer; three axles in total).  And both trucks will have a different weight per axle depending on whether they are loaded or empty.
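To make the incompatibility concrete, here is a toy classifier. It is a deliberately simplified stand-in for the real TMG rules (the actual FHWA scheme has 13 classes and many more conditions), applied to the two beverage trucks:

```python
def tmg_class(units, axles, tires):
    """Very simplified stand-in for the FHWA TMG axle-based scheme."""
    if units == 1 and axles == 2 and tires == 6:
        return 5  # single unit, 2 axles, 6 tires
    if units == 2 and axles <= 4:
        return 8  # combination unit, one trailer, 3-4 axles
    return None   # everything else is out of scope for this sketch

def body_class(body):
    """A body-type scheme sees only the body, not the axles."""
    return body

left  = dict(units=1, axles=2, tires=6,  body="beverage")
right = dict(units=2, axles=3, tires=10, body="beverage")

# Same body type, different TMG class:
print(body_class(left["body"]),  tmg_class(1, 2, 6))   # beverage 5
print(body_class(right["body"]), tmg_class(2, 3, 10))  # beverage 8
```

The body-type scheme puts both trucks in one class; the axle-based scheme splits them, and nothing in either scheme tells you how to cross-walk to the other.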

And this does not even get at the issue of whether ANY vehicles with a GVW less than 10,000 lbs should even be called trucks.  These light trucks are the subject of complex tariff systems (see http://www.npr.org/sections/money/2015/06/12/414029929/episode-632-the-chicken-tax). The National Highway Traffic Safety Administration, the Federal Motor Carrier Safety Administration and the Federal Highway Administration may be too “chicken” to call light "trucks" trucks.
To paraphrase Mark Twain, there is only One True Truck Classification System... in fact there are several of them.  A truck classification system that focuses on fuel type won’t serve the needs of pavement engineers.  Nor is a fuel-based classification system likely to collect the axle weight data that would make it possible to develop crosswalks between systems.  It is probably unreasonable to expect to find a truck classification system that serves all needs.  Just try to find the ones that are useful to you.

Saturday, May 6, 2017

Traffic Congestion

Crosstown Traffic

All you do is slow me down
And I got better things on the other side of town

How is capacity determined in traffic analysis?

Congestion will slow you down.  The travel time in congestion is modeled as a function of the capacity of a road.  The closer the demand (volume) on a link comes to that capacity, the worse the congestion. However, the capacity used in those calculations is often misunderstood.  If it were better understood, perhaps no one would ask questions like “How can you even have a volume-to-capacity ratio that is greater than 1.0?”
Traffic flow theory says that traffic moves like a compressible fluid.  While my auto insurer of course doesn’t want me to compress my car, the compression as it is used here, and by extension the capacity of that compressible traffic, refers not to the physical bumper-to-bumper capacity.  It refers to the operating capacity of the car.  And that operating capacity includes not only the car itself, but also the spacing to the car ahead.  It is that spacing between cars that is compressible, not the car itself.
In a compressible fluid, the flow is the product of the speed and the density.  Because flow is typically expressed in cars per hour, and speed in miles per hour, density has to be expressed in cars per mile.  The space consumed by a car is the bumper-to-bumper car length plus the spacing to the car in front, in feet per car, or, converting units, miles per car. The density of traffic as a fluid is then the inverse of the operating length of a car, i.e. the bumper-to-bumper length plus spacing.
When I took Driver’s Ed, the rule of thumb for safe spacing to the next car was a function of driving speed: one car length for each 10 MPH of your speed.  This has since changed to a 2 or 3 second gap, which says that the spacing between cars should be the distance traveled in that many seconds at the speed of the car.  In any event, both the old rule of thumb and the newer gap-time rules make the operating length a function of speed.  If the bumper-to-bumper car length is 20 feet and the operating speed is 70 MPH, then the space “occupied” by the car is 20 feet plus 7 car lengths (70 MPH / 10), or 160 feet.  The density would thus be 1 car per 160 feet, or 33 cars per mile.  This means the flow at 70 MPH and a density of 33 cars per mile would be 2,310 cars per hour, which is just about the standard capacity in passenger cars for a freeway with a design speed of 70 MPH.
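The worked example above is just a few unit conversions; here it is spelled out, with the 20-foot car length being the assumed value used in the text.

```python
# Flow = speed x density, with the "one car length per 10 MPH" spacing rule.
FEET_PER_MILE = 5280

def operating_length_ft(speed_mph, car_length_ft=20):
    """Bumper-to-bumper length plus one car length per 10 MPH of speed."""
    return car_length_ft + (speed_mph / 10) * car_length_ft

def flow_cars_per_hour(speed_mph, car_length_ft=20):
    # Density (cars/mile) is the inverse of the operating length (miles/car).
    density_cars_per_mile = FEET_PER_MILE / operating_length_ft(speed_mph, car_length_ft)
    return speed_mph * density_cars_per_mile

print(operating_length_ft(70))         # 160.0 feet occupied per car
print(FEET_PER_MILE / 160)             # 33.0 cars per mile
print(round(flow_cars_per_hour(70)))   # 2310 cars per hour
```

At 70 MPH the rule of thumb gives 160 feet per car, 33 cars per mile, and 2,310 cars per hour, matching the numbers in the paragraph above.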
With the 2 or 3 second following rule you would get much lower capacities, but the maximum flow rate is actually a transition between laminar and turbulent flow of a fluid, which is way beyond what I wanted to discuss here.  So when the LOS is F and the volume-to-capacity (v/c) ratio is greater than 1.0, the spacing between cars is simply less than the one-car-length-per-10-MPH rule.  Safety suffers, but a v/c ratio above 1.0 is still physically possible. The maximum density is of course one car per bumper-to-bumper car length, but since that occurs at 0 MPH, the flow rate there is zero anyway.

Friday, April 21, 2017

Most Probable Travel Trip Table

The Gambler

You've got to know when to hold 'em
Know when to fold 'em

Travel Demand Models Forecast the “Most Probable” Trip Table,
They Don’t Forecast the Only Possible Trip Table

Because the trip tables produced by travel demand models have precise values, they are often confused with being the only possible answer.  When the US Census says that an average household has 2.58 people, this does not mean that this is the only possible answer.  In fact, you can be certain that you will find absolutely no households with exactly 2.58 people, unless for some reason you are counting children as fractions of adults.  But if you are planning the infrastructure needed to serve households, this average value is more useful than having no information at all.
For the same reason, Las Vegas casinos want to know the average outcome of a game of chance in order to set the odds.  They don’t expect to know the outcome of each single game; they want to know the probability of the outcomes in the long term.  They make money by knowing the average outcomes, including the most probable outcome, not the outcome of each game.
Trip tables computed in travel demand models are also most probable outcomes. Whether the table is produced by Iterative Proportional Fitting (IPF)/Frataring, gravity model trip distribution, logit mode choice, etc., what may appear to be a single precise trip table is in reality only the most probable trip table.  And just as the Las Vegas casinos use probability and statistics to come up with the probable outcomes of games, not by testing an infinitely large number of games, the choice models in travel demand models are also derived from probability and statistics. A difference is that every possible outcome in a casino game is known.  The odds can be quoted because, for example, you know that no matter how many times you roll a pair of six-sided dice, there are only 36 possible combinations that will appear.
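To show what a "most probable table" looks like in practice, here is a minimal IPF/Fratar sketch on a two-zone example; the seed table and the row/column targets are made-up numbers, not data from any study.

```python
import numpy as np

def ipf(seed, row_targets, col_targets, iters=50):
    """Iterative Proportional Fitting: alternately scale rows and columns
    until the table matches both sets of margins."""
    table = np.asarray(seed, dtype=float).copy()
    for _ in range(iters):
        table *= (np.asarray(row_targets) / table.sum(axis=1))[:, None]
        table *= np.asarray(col_targets) / table.sum(axis=0)
    return table

# Made-up seed (e.g. an observed base-year table) and forecast margins.
seed = [[10, 20],
        [30, 40]]
trips = ipf(seed, row_targets=[60, 40], col_targets=[50, 50])
print(trips.round(1))
print(trips.sum(axis=1))  # matches the row targets [60, 40]
print(trips.sum(axis=0))  # matches the column targets [50, 50]
```

The result looks like one precise table, but it is precisely the point of the post that it is only the most probable table consistent with those margins, not the only possible one.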

When applying statistics in choice models, the most probable outcome is computed without having to know every possible outcome.   To use the technical terms, the most probable mesostate (i.e. outcome; for example, rolling a seven in a game of “craps”) is selected without having to know how many different microstates (i.e. ways to reach that outcome; in “craps” there are six ways to roll a seven) there are in that mesostate.  Given that the number of cells in a trip table is seldom known before a transportation study is prepared, and you would have to know that number to even enumerate the possible mesostates (i.e. outcomes), that is perhaps the best that can be expected. Travel demand forecasts will be the most probable outcome, but they are not, and never could be, a guaranteed outcome.  But Las Vegas seems to do all right for itself by knowing only the most probable outcomes, and it can be expected that those using the trip tables output from travel demand models will have the same luck.
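The craps example above can be counted directly: enumerate the microstates (ordered die pairs) behind each mesostate (the sum).

```python
from collections import Counter
from itertools import product

# Count the microstates (ordered die pairs) behind each mesostate (the sum).
ways = Counter(a + b for a, b in product(range(1, 7), repeat=2))

print(sum(ways.values()))       # 36 possible combinations in all
print(max(ways, key=ways.get))  # 7 is the most probable outcome...
print(ways[7])                  # ...because 6 microstates produce it
```

Seven is the most probable mesostate because it has the most microstates (6 of 36), which is exactly the sense in which a model's trip table is "most probable."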

Saturday, April 8, 2017

Truck Platooning

Convoy

We gonna roll this truckin' convoy
'Cross the U-S-A.

How should Travel Demand Models be modified to address Truck Platoons?

Truck platoons are being discussed as a new and innovative way to reduce fuel consumption, reduce air pollution and CO2 emissions, and increase highway capacity.  Given these promises, it is natural to try to accommodate truck platoons in the travel demand models that forecast traffic volumes in response to changes in capacity.  However, the idea behind truck platoons is not new: C. W. McCall’s classic song “Convoy” was released way back in 1975.  What is new is the technology that might make truck platoons, i.e. convoys, operate safely.
It might be useful to first discuss the concept of platooning.  In NASCAR racing the concept is called drafting.  Drafting is based on proven aerodynamics: closely spaced vehicles traveling in a platoon consume less energy than a vehicle traveling alone.  This is not only true for vehicles; it is also why geese fly in a closely spaced V formation, where the spacing between geese conserves energy. However, while the overall energy consumed is lower, those benefits are not evenly distributed.  The lead goose achieves lower savings than all the other geese in the formation. Similarly, the lead vehicle in a platoon will also achieve lower savings.

source: http://nascarnation.us/page/nascar-sprint-cup-draft-and-aero-explained
According to Wikipedia, the show Mythbusters tested drafting behind an 18-wheeler and found that traveling 100 feet behind the truck increased overall mpg efficiency by 11%, while traveling 10 feet behind it produced a 39% gain. On the same episode, Mythbusters also demonstrated that it can be very dangerous for a following vehicle if one of the truck's tires delaminates: chunks of ejected rubber can be large enough to cause serious harm, even death, to a driver following too closely.
In addition, if the vehicle in front stops suddenly, there is little time to react and stop safely. Truck platooning is proposed as a way to use technology to reduce the safe following gap between trucks. This was discussed during the March 2017 session of FHWA’s Talking Freight webinar series.  The recommended safe gap between trucks without platooning is 7 to 8 seconds.  At 60 MPH, the distance associated with this time gap is approximately 600 to 700 feet.  A normal passenger vehicle such as a car will normally take approximately 320 feet to come to a complete stop after recognizing the need to stop; in comparison, a truck and trailer takes about 525 feet. A 700-foot gap should accommodate both the reaction time and a safe stopping distance for trucks.
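The gap distances above are just speed times time; the 60 MPH speed and the 7-8 second gap are the figures quoted in the paragraph.

```python
# Distance covered during a time gap: speed (ft/s) x gap (s).
FT_PER_S_PER_MPH = 5280 / 3600  # 1 MPH is about 1.47 ft/s

def gap_distance_ft(speed_mph, gap_seconds):
    return speed_mph * FT_PER_S_PER_MPH * gap_seconds

print(round(gap_distance_ft(60, 7)))  # 616 ft
print(round(gap_distance_ft(60, 8)))  # 704 ft
```

A 7 to 8 second gap at 60 MPH works out to 616-704 feet, which is the "approximately 600 to 700 feet" in the text, and comfortably covers the 525-foot truck stopping distance.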
During that Talking Freight webinar, research on trucks operating in platoons was presented.  It suggested that for two-truck platoons, fuel savings of 10% on the rear truck and 4.5% on the front truck could be achieved at gaps of 40 feet.  These gaps could only be achieved safely with the proposed new technology, which includes vehicle-to-vehicle (V2V) communication between the trucks in the platoon.
As currently envisioned, trucks with suitable technology would operate as platoons on a completely voluntary basis.  Platooning would only be offered on controlled access highways operating at speeds in excess of 50 MPH, and only when weather conditions were favorable.  If these conditions were not present, platooning would not be offered, and standard gaps and capacity would be expected.
The assignment modules in travel demand models consider costs and times that vary as the volume-to-capacity ratio changes during assignment iterations.  However, equilibrium after a number of iterations is not guaranteed if the capacity on road links changes between iterations, for example in response to things such as platooning.  If truck platooning provides no advantage for trucks traveling at slow speeds on uncontrolled, or partially controlled, access highways, then there is no reason to ever change the capacity on those road links.  Even on controlled-access links, there is no reason to introduce capacities that would vary because of truck platooning.  The volume delay function in travel demand models typically produces no change to costs or times at volume-to-capacity ratios below 0.7, and when speeds are greater than 50 MPH those ratios would never be exceeded. Thus the conditions where truck platooning might have an impact in travel demand models are precisely the conditions where truck platooning will not be offered.
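The post does not name a specific volume delay function, but the widely used BPR curve illustrates the point: below a v/c of about 0.7 the congested time barely differs from the free-flow time.

```python
def bpr_time(free_flow_time, v_over_c, alpha=0.15, beta=4):
    """Bureau of Public Roads volume-delay function:
    t = t0 * (1 + alpha * (v/c)^beta)."""
    return free_flow_time * (1 + alpha * v_over_c ** beta)

t0 = 10.0  # minutes of free-flow travel time on some hypothetical link
for vc in (0.3, 0.5, 0.7, 0.9, 1.1):
    print(vc, round(bpr_time(t0, vc), 2))
# At v/c = 0.7 the time is only ~3.6% above free flow (10.36 min),
# while at v/c = 1.1 it is ~22% above (12.2 min).
```

With the default BPR parameters, congestion delay is negligible until v/c approaches 1.0, so varying capacity for platooning on links that never get near that range would change nothing in the assignment.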
Also, trucks would always want to be something other than the lead truck in order to achieve the higher fuel savings.  To be widely adopted, the technology will need not only to handle shorter gaps but also to assure an equitable share of positions within the platoons.  However, there is no reason to believe that truck platooning would be offered on links where it could actually affect the capacities currently used in travel demand models; where it will be offered, it would have no impact.  Rather than including capacities that vary with speed, which would violate a basic premise of the assignment modules, it thus appears that there is no reason to modify travel demand models to include truck platooning.

Friday, April 7, 2017

Crashes Increase with Safer Cars

God Bless the Child

Them that's got shall have
Them that's not shall lose

Why the highway fatality rate might increase when new safety equipment is added to some cars.
Technology is being sold in cars today that makes those cars less likely to be involved in crashes.  According to an analysis by Consumer Reports, this technology includes:

  • Automatic emergency braking (AEB) - Brakes are automatically applied to prevent a collision or reduce collision speed.
  • Forward-collision warning (FCW) - Visual and/or audible warning intended to alert the driver and prevent a collision.
  • Blind-spot warning (BSW) - Visual and/or audible notification of vehicle in blind spot. The system may provide an additional warning if you use your turn signal when there is a car next to you in another lane.
  • Rear cross-traffic warning - Visual, audible, or haptic notification of object or vehicle out of rear camera range, but could be moving into it.
  • Rear Automatic Emergency Braking (Rear AEB) - Brakes are automatically applied to prevent backing into something behind the vehicle. This could be triggered by the rear cross-traffic system, or other sensors on the vehicle.
  • Lane-departure warning (LDW) - Visual, audible, or haptic warning to alert the driver when they are crossing lane markings.
  • Lane-keeping assist (LKA) - Automatic corrective steering input or braking provided by the vehicle when crossing lane markings.
  • Lane Centering Assist - Continuous active steering to stay centered between the lane markings (active steer, autosteer, etc.)
  • Adaptive Cruise Control - Adaptive cruise uses lasers, radar, cameras, or a combination of these systems to keep a constant distance between you and the car ahead, automatically maintaining a safe following distance. If highway traffic slows, some systems will bring the car to a complete stop and automatically come back to speed when traffic gets going again, allowing the driver to do little more than pay attention and steer.
If these technologies were adopted, it would certainly be reasonable to expect the overall fatality rate to decrease.  However, the National Highway Traffic Safety Administration is currently reporting an increase in the highway fatality rate.  How can cars be safer... and yet the fatality rate be increasing? The problem may be that the crash rate is computed for all vehicles while the safety equipment is only in newer cars.  If the crash rate went down for newer cars with safety equipment AND the crash rate was unchanged for all other vehicles, then the overall average crash rate would indeed decrease. However, if the crash rate for new cars went down but the crash rate for all other vehicles increased, even by a small amount, then the overall crash rate might go up until the market share of cars with the newer equipment was larger.
So how could the crash rate for vehicles that are NOT equipped with newer safety technology go up?  Because the combined crash rate is determined not only by the vehicle, but also by the driver.  When no vehicles have improved safety technology, all drivers behave the same way. However, if some vehicles have improved safety equipment and others do not, this changes the dynamics of defensive driving.
All drivers have to make judgments about the behavior of other drivers; that is the essence of defensive driving.  If no safety technology is available, then when a driver is being tailgated, that driver can correctly assume the tailgater is driving unsafely and choose to change lanes or take some other defensive action.  As safety equipment becomes more widespread, that same driver might assume the tailgater has safety equipment and choose not to change lanes or take some other defensive action.  If that assumption is wrong, the crash rate of that driver might actually increase.
Currently the fatality rate is about 1.1 per 100 Million Vehicle Miles of Travel (MVMT).  Let’s assume that the fatality rate decreases by 50% for all new vehicles with safety technology.  If the fatality rate for all other vehicles did not change, then the overall rate would go down.  However, let’s instead assume that a decrease in defensive driving causes an increase in the fatality rate due to drivers’ actions. Properly, this change in driver behavior should be applied to all drivers, both those with and without the new safety technology.  To simplify, assume the combined effect for equipped vehicles, more safety technology but less defensive driving, is a 50% reduction in their fatality rate, and that the other vehicles experience only the effect of less defensive driving, a 5% increase in their fatality rate.
The combined fatality rate for all vehicles depends on the percentage of vehicles equipped with new safety technology.  As shown in the table below, if only 1% of the vehicles had the new safety technology, the overall fatality rate would decrease only if there was also no change in the fatality rate for all other vehicles.  If instead the fatality rate of the other vehicles increases as assumed, the overall rate rises about 4%, from 1.1 to 1.15 per 100 MVMT, even without any change in the miles traveled.  While there would certainly be a decrease in the number of fatalities involving cars with the new safety equipment, this doesn’t offset the increase in the number of fatalities involving all other vehicles.  With these fatality rates, it is not until the share of cars with new safety equipment reaches 10% of the fleet that the overall fatality rate drops back below the base rate.
This is why, in the short term, improvements in technology that produce safer cars could result in an increase in the crash rate.  Large improvements for a few, at the expense of a small worsening for the many, might result in a worse overall condition.  This short-term change does not offset the potential long-term benefit; it just suggests that more widespread adoption of the technology has to be achieved before overall safety improves.
Fatality Rate (fatalities per 100 Million VMT)

Base Rate: 1.1
Safer Cars compared to the Base: 50%
Other Cars compared to the Base: 105%

% of cars with      % of other cars with   Combined rate if other    Combined rate if other
safety equipment    no safety equipment    cars have the same        cars have a higher
                                           crash rate as the base    crash rate than the base
 1%                 99%                    1.09                      1.15
 2%                 98%                    1.09                      1.14
 3%                 97%                    1.08                      1.14
 4%                 96%                    1.08                      1.13
 5%                 95%                    1.07                      1.12
 6%                 94%                    1.07                      1.12
 7%                 93%                    1.06                      1.11
 8%                 92%                    1.06                      1.11
 9%                 91%                    1.05                      1.10
10%                 90%                    1.05                      1.09
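The combined rates in the table are just a VMT-weighted average of the two fleets' rates, using the assumptions stated above (base 1.1, equipped cars at 50% of base, other cars at 105% of base):

```python
BASE = 1.1           # fatalities per 100 million VMT
SAFER = 0.50 * BASE  # equipped cars: 50% of the base rate
OTHER = 1.05 * BASE  # unequipped cars: 5% above the base rate

def combined_rate(share_equipped, other_rate):
    """VMT-weighted average of the two fleets' fatality rates."""
    return share_equipped * SAFER + (1 - share_equipped) * other_rate

for pct in range(1, 11):
    p = pct / 100
    print(pct, round(combined_rate(p, BASE), 2), round(combined_rate(p, OTHER), 2))
# e.g. at 1%: 1.09 and 1.15; at 10%: 1.05 and 1.09 -- only at a 10% share
# does the combined rate with the higher "other" rate fall back below 1.1.
```

Running this reproduces both columns of the table, including the 10% crossover point where the overall rate finally drops below the base.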