Define Markov chain models for Cases 6.3–6.8 by specifying the state space in detail and providing a one-step transition matrix using appropriate notation to stand in for the unknown probabilities.

Case 6.3: A model of the movement of a taxi defines the state of the system to be the region of the city that is the destination of the current rider, and the time index to be the number of riders the taxi has transported. When the taxi delivers a rider, it stays in the destination region until it picks up another rider. The Markov property seems plausible here, because the destination of a rider does not depend on the destinations of previous riders (it makes no sense for a rider to ask the driver, “Where have you been today?” and then decide where to go). Time stationarity may be violated because traffic patterns change throughout the day in most cities.

Case 6.4: A model of computer keyboard use defines the state of the system to be the key that a person is currently typing, and the time index to be the number of keys typed. Applicability of the Markov property depends on the material being typed. If the material is predominantly data (numbers and symbols), then perhaps the next key typed depends only on the last key typed, if it depends on any previous keys at all. On the other hand, if the material is text (such as this book), then the requirement to spell words influences the next letter typed. For instance, consider the probability that the next letter typed is “y” when the last letter typed is “r,” compared to the same probability when the previous four letters typed are known to be “Larr.” Nevertheless, a Markov chain model of typing may be a useful approximation. Time stationarity will often be satisfied, since the sequence of keys typed should not change much throughout the material.

Case 6.5: A model of the preferences of consumers for brands of toothpaste defines the state of the system to be the brand of toothpaste the consumer currently uses, and the time index to be the number of tubes of toothpaste purchased. The Markov property implies that consumers’ choices depend only on the brand that they currently use, and not on past experience with other brands (that is, consumers have short memories, or they think brand characteristics change so rapidly that past experience with a brand is not relevant). Time stationarity can apply only over a brief time period, since advertising campaigns, discounts, etc., influence consumers’ choices.

Case 6.6: A model of the weather in Columbus, Ohio, defines the state of the system to be the high temperature, in whole degrees, and the time index to be the number of days. Since weather patterns move quickly through Columbus, today’s temperature may be a good indicator of tomorrow’s temperature, while the value of information about previous days may be negligible; therefore the Markov property is plausible. But seasonal fluctuations in temperature make the time-stationarity property a poor approximation over any period spanning more than one season.
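For Cases 6.3–6.6, the requested one-step transition matrix has the same generic form; only the interpretation of the states changes. As a minimal sketch, assume the states are labeled 1, 2, …, m (the regions of the city in Case 6.3, the keys on the keyboard in Case 6.4, the brands of toothpaste in Case 6.5, the possible whole-degree high temperatures in Case 6.6; the labels and the value of m are placeholders, since the cases do not specify them). Writing X_n for the state after n transitions and p_{ij} for the unknown probabilities, the matrix is

\[
p_{ij} = \Pr\{X_{n+1} = j \mid X_n = i\},
\qquad
\mathbf{P} =
\begin{bmatrix}
p_{11} & p_{12} & \cdots & p_{1m}\\
p_{21} & p_{22} & \cdots & p_{2m}\\
\vdots & \vdots & \ddots & \vdots\\
p_{m1} & p_{m2} & \cdots & p_{mm}
\end{bmatrix},
\qquad
p_{ij} \ge 0, \quad \sum_{j=1}^{m} p_{ij} = 1 \text{ for each } i.
\]

In Case 6.6, for example, one would expect most of the probability in row i to sit near column i, since tomorrow’s high temperature is usually close to today’s; the notation itself is unaffected.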
Case 6.7: A model of an industrial robot defines the state of the system to be the task that the robot is performing, and the time index to be the number of tasks performed. The robot works on two different kinds of assemblies, each with its own distinct collection of tasks, and it requires a different tool for each kind of assembly. Changing the tool is one of the robot’s tasks. The Markov property may be reasonable if the different kinds of assemblies the robot encounters are randomly mixed. However, if each kind of assembly is produced in a batch of, say, 50 assemblies, then the robot will change tools every fiftieth assembly, and the next state of the system will depend on more than just the current state; it will also depend on how many assemblies of a particular kind have been produced. Time stationarity may be plausible if the mix of different assemblies does not change over time. Periodicity may arise because the tool-change state can only be entered after completing an assembly.

Case 6.8: A model of an auto insurance policy defines the state of the system to be whether or not the policyholder had an accident last year, and the time index to be the number of years. The insurance company believes that after having an accident, a policyholder is a more careful driver for the next 2 years. The validity of the Markov property depends on how we define the state space. Since a policyholder is more careful for 2 years following an accident, the Markov property does not apply if the state space is simply whether or not the policyholder had an accident this year, M = {1, 2} ≡ {accident, no accident}. However, if we expand the state space to be the accident history for the last 2 years, M = {1, 2, 3, 4} ≡ {(accident, accident), (accident, no accident), (no accident, accident), (no accident, no accident)}, then the Markov property may apply. Clearly, insurance companies do not believe that time stationarity holds generally, since they charge premiums based on the age of the policyholder.

The Markov and time-stationarity properties are always approximations and are therefore better for some modeling-and-analysis problems than others. These examples demonstrate two modeling principles: conformance to the Markov property can sometimes be improved by redefining the state space of the process, and time stationarity is usually more appropriate over a restricted time period than over all time.
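To make the first principle concrete, consider the expanded state space of Case 6.8. As a sketch, read each pair as (status the year before last, status last year); that ordering is an assumption, since the case does not fix one. A transition from state (x, y) must lead to a state of the form (y, z), so half of the entries of the one-step transition matrix are structurally zero:

\[
\mathbf{P} =
\begin{bmatrix}
p_{11} & p_{12} & 0 & 0\\
0 & 0 & p_{23} & p_{24}\\
p_{31} & p_{32} & 0 & 0\\
0 & 0 & p_{43} & p_{44}
\end{bmatrix},
\]

where each row sums to 1. The insurance company’s belief that policyholders drive more carefully for 2 years after an accident would be expressed through inequalities among the nonzero entries, not through the structure of the matrix itself.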