
Improving Traveling Umpire Solutions the Miami Heat Way: Not one, not two, not three…

Those who know me are aware of my strong passion for basketball, so I had to find a way to relate this post to my favorite sport. Fans of basketball in general, and of the Miami Heat in particular, might be familiar with this video clip in which LeBron James makes a bold prediction. Back in 2010, when asked how many titles the Heat’s big three would win together, he replied: “Not one, not two, not three, not four, not five, not six, not seven, …” While I’d love to see them win 8 titles, it sounds a bit (a lot) unlikely. But I can’t complain about their record so far: winning two titles in three Finals appearances isn’t bad at all. What does this have to do with baseball umpires? Let’s get back to OR for a moment.

A couple of years ago, I wrote a post about scheduling baseball umpires. In the article behind that post, which I co-authored with Hakan Yildiz and Michael Trick, we studied a problem called the Traveling Umpire Problem (TUP). The TUP doesn’t include all the details of the real problem faced by MLB, but it captures the most important features that make the problem difficult. Here’s a short description (detailed description here):

Given a double round-robin tournament with 2N teams, the traveling umpire problem consists of determining which games will be handled by each one of N umpire crews during the tournament. The objective is to minimize the total distance traveled by the umpires, while respecting constraints that include visiting every team at home, and not seeing a team or venue too often.

And when I say difficult, I mean it: the TUP is really hard to solve. For example, there are 16-team instances (requiring only 8 umpire crews) for which no feasible solution is known.

Two of my Brazilian colleagues, Lucas de Oliveira and Cid de Souza, got interested in the TUP and asked me to join them in an effort to try to improve the quality of some of the best-known solutions in the TUP benchmark. There are 25 instances in the benchmark for which we know a feasible solution (upper bound) and a lower bound, but not the optimal value. Today, we’re very happy to report that we managed to improve the quality of many of those feasible solutions. How many, you ask? I’ll let LeBron James himself answer that question:

“Not one, not two, not three, … not ten, … not eighteen, … not twenty-three, but 24 out of 25.”

OK, LeBron got a bit carried away there. And he forgot to say we improved 25 out of the 25 best-known lower bounds too. This means those pesky optimal solutions are now sandwiched between numbers much closer to each other.

Here’s the approach we took. First, we strengthened a known optimization model for the TUP, making it capable of producing better bounds and better solutions in less time. Then, we used this stronger model to implement a relax-and-fix heuristic. It works as follows. Waiting for the optimization model to find the optimal solution would take forever because there are too many binary decision variables (they tell you which venues each umpire visits in each round of the tournament). At first, we require that only the decisions in round 1 of the tournament be binary (i.e. which games the umpires will be assigned to in round 1) and solve the problem. This solves pretty fast, but allows for umpires to be figuratively cut into pieces and spread over multiple venues in later rounds. Not a problem. That’s the beauty of math models: we test crazy ideas on a computer and don’t slice people in real life. We fix those round-1 decisions, require that only round-2 variables be binary, and solve again. This process gets repeated until the last round. In the end, we are not guaranteed to find the very best solution, but we typically find a pretty good one.
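For readers who like to see algorithms as code, here is a minimal Python sketch of the relax-and-fix loop described above. It is not our actual implementation: `solve_model` is a hypothetical callback that stands in for building and solving the optimization model with the requested integrality pattern.

```python
# Schematic relax-and-fix loop (illustrative only, not the paper's code).
# `solve_model(binary_rounds, fixed)` is a hypothetical callback that builds the
# umpire-assignment model, declares the assignment variables of the rounds in
# `binary_rounds` as binary, relaxes all later rounds to [0, 1], respects the
# assignments already frozen in `fixed`, solves, and returns a dictionary
# mapping (umpire, round) to the venue that umpire is assigned to.

def relax_and_fix(num_rounds, solve_model):
    fixed = {}  # decisions frozen so far: (umpire, round) -> venue
    for r in range(1, num_rounds + 1):
        solution = solve_model(binary_rounds=[r], fixed=fixed)
        # Freeze only the round-r assignments; later rounds remain fractional.
        for (umpire, rnd), venue in solution.items():
            if rnd == r:
                fixed[(umpire, rnd)] = venue
    return fixed  # a heuristic schedule (no optimality guarantee)

if __name__ == "__main__":
    # Toy stand-in so the sketch runs: umpire u "visits" venue u every round.
    def toy_solver(binary_rounds, fixed):
        r = binary_rounds[0]
        return {(u, r): u for u in range(1, 4)}

    print(relax_and_fix(num_rounds=3, solve_model=toy_solver))
```

The variations mentioned next only change which rounds are placed in `binary_rounds` at each step.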

Some possible variations of the above would be to work with two (or more) rounds of binary variables at a time, start from the middle or from the end of the tournament, etc. If you’re interested in more details, our paper can be downloaded here. Our best solutions and lower bounds appear in Table 10 on page 22.

We had a lot of fun working on the TUP, and we hope these new results can help get more people excited about working on this very challenging problem.



Installing iPhone Apps: Apple Doesn’t Care About Average Completion Time

Having recently spent some time outside the US, I found, upon my return, that many of the Apps on my iPhone needed to be updated. No big deal: I clicked “Update all”, typed in my password, and let the operating system finish the job. After watching the update process for a minute or so, I noticed one interesting fact: Apps were updated (i.e. downloaded+installed) in the order the updates became available (chronologically increasing), rather than by increasing order of the size of the update (in bytes). If you’re asking “why does it matter?”, read on.

During my grad school years at Carnegie Mellon, I had the pleasure of taking the Sequencing and Scheduling class with Egon Balas (I later sat in the class again as his TA, and finally taught it to the MBA students in 2004). So let’s start with some terminology: given a machine (the iPhone) and a set of tasks (Apps) to be executed (updated) on the machine, the completion time of a task is the time at which it is finished. For example, let’s assume the Concorde TSP App needs an update. If time zero is the moment I enter my iTunes password, the completion time of task “Concorde TSP” is the earliest time at which this App’s latest version is ready to run on my iPhone. The total completion time of a set of tasks is simply the sum of the completion times of all tasks, and the makespan is the completion time of the task that gets updated last.

The App update process has no release dates (all outdated Apps are ready to be updated at time zero), is non-preemptive (once started, the update of an App continues until it’s finished; ignoring crashes and other issues), and doesn’t involve sequence-dependent setup times (as soon as an App finishes updating, the next App in line can start its update right away). Under these circumstances, the makespan of a group of outdated Apps is always the same, regardless of the order in which they get updated. For example, if App A takes 15 seconds to download and install, and App B takes 10 seconds to download and install, it will take me 25 seconds to update both of them, regardless of which one is updated first. So far, so good. But let’s see what happens with the total (or average) completion time.

Continuing with the two-App example above, if I update the Apps in the order A, B, the completion time of A is 15, and the completion time of B is 25. The total completion time is 15+25=40, and the average completion time is 40/2=20 seconds. If the Apps are updated in the reverse order B, A, the completion time of B is 10, and the completion time of A is 25. The total completion time now is 10+25=35, and the average completion time decreases to 35/2=17.5 seconds. If there were other Apps being updated before or after the pair A, B, swapping them to make sure that B goes before A (because B takes less time) would have the same effect (the A, B swap doesn’t change the completion time of other Apps). What I just explained is called an exchange argument. It proves that whenever two Apps are out of order (in the sense that a smaller one is placed after a larger one), swapping them reduces the total/average completion time. Therefore, the minimum total/average completion time is obtained when the Apps are sorted by increasing order of duration. In the scheduling literature, this is called the SPT rule (Shortest Processing Time first).
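To make the arithmetic concrete, here is a small Python check: it computes the total completion time of a sequence of update durations and verifies, by brute force on a made-up set of sizes, that the SPT order is the best one.

```python
from itertools import permutations

def total_completion_time(durations):
    """Sum of completion times when updates run back-to-back in the given order."""
    total, clock = 0, 0
    for d in durations:
        clock += d      # this App finishes d seconds after the previous one does
        total += clock
    return total

# The two-App example from the text: A takes 15 seconds, B takes 10.
print(total_completion_time([15, 10]))  # order A, B -> 40
print(total_completion_time([10, 15]))  # order B, A -> 35

# Brute-force check on a larger, made-up set of update durations (in seconds):
apps = [42, 7, 23, 5, 60]
best = min(permutations(apps), key=total_completion_time)
print(best == tuple(sorted(apps)))      # True: the SPT order minimizes the total
```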

I still haven’t answered the question of whether all of this matters because the makespan doesn’t depend on the order of Apps (updating A and B always takes 25 seconds). The answer is I don’t know! It’s a psychological effect. Shorter completion times may give the user the impression that the update process is going fast because a bunch of small Apps get updated quickly. By updating larger Apps first, the user may have the impression that the process is taking longer because, after a while, there are still many Apps to go. Should Apple worry about this? I’ll leave that question to my colleagues in the Marketing department who specialize in Consumer Behavior. If the answer turns out to be “yes”, then you now know what to do.

P.S. I’d like to know what other mobile operating systems do. Do they use SPT? Please let me know in the comments.



The Joy of Baking (Optimally)

‘Tis the season of baking all kinds of things: cookies, cakes, breads, brownies, pies, and my favorite Brazilian dessert, “pudim de leite moça”. Click here for the step-by-step recipe.

Many OR bloggers, such as Laura McLay and Anna Nagurney, actually enjoy baking, and both have written posts on the subject (e.g. here and here). I happen to include myself in this group and, yes, I made the pudim mentioned above (using my mom’s recipe).

My goal today is to approach the art of baking from an optimization point of view. Let’s say you have a long list of items to bake. Perhaps you’re hosting a mega party at your house, or you’re helping your local church or favorite charity with their holiday cooking. You have an oven that can only fit so much at a time (think of area, or volume). Each item to be baked occupies some space in the oven and needs to bake for a specific amount of time. In what order should you bake your items so that you finish as soon as possible? (Side note: it may not be obvious at first sight, but this is the same problem faced by a container port that needs to decide the order in which to unload cargo ships.)

In the OR world, this is a job scheduling problem with a cumulative machine. The jobs are the tasks to be performed (items to bake), the machine (or resource) is the oven. We say the oven is cumulative, as opposed to disjunctive, because it can deal with (bake) multiple items at a time. The unknowns in this optimization problem are the start times of each job (when to begin baking each item). The objective is to minimize the makespan, which is defined as the finish time of the last job (the time at which it’s OK to turn off the oven). Finally, this is a non-preemptive problem because, typically, once you start baking something, it stays in the oven until it’s done.

This problem occurs so often in practice that the Constraint Programming (CP) community created a global constraint to represent it. It’s called the cumulative constraint (what a surprise!). Here’s a reference. For example, let’s say that we have a 10-cubic-foot (cf) oven and we need to bake five items. The baking times (in minutes) are 20, 25, 40, 30, and 30. The space requirements in cf are, respectively, 6, 4, 5, 6, 4. If the time at which we begin baking item i is denoted by the variable s_i, we can write the following in a CP model:

\mathrm{cumulative}([s_1,s_2,s_3,s_4,s_5],[20,25,40,30,30],[6,4,5,6,4],10)

The above constraint makes sure that the start times s_i are such that the capacity of the oven is never exceeded. To minimize the makespan, we have to minimize the maximum among s_1+20, s_2+25, s_3+40, s_4+30, and s_5+30, i.e. each start time plus the corresponding baking time.
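For readers who want to play with this, here is one way the model could be written in Python. I'm using Google OR-Tools' CP-SAT solver purely as an illustration; the cumulative constraint comes from the CP literature, and any system offering it would do.

```python
from ortools.sat.python import cp_model

durations = [20, 25, 40, 30, 30]  # baking times in minutes
demands   = [6, 4, 5, 6, 4]       # space required in cubic feet
capacity  = 10                    # oven capacity in cubic feet
horizon   = sum(durations)        # trivial upper bound on the makespan

model = cp_model.CpModel()
starts, ends, intervals = [], [], []
for i, p in enumerate(durations):
    s = model.NewIntVar(0, horizon, f"s_{i}")
    e = model.NewIntVar(0, horizon, f"e_{i}")
    starts.append(s)
    ends.append(e)
    intervals.append(model.NewIntervalVar(s, p, e, f"item_{i}"))

# The cumulative constraint: space in use never exceeds the oven's capacity.
model.AddCumulative(intervals, demands, capacity)

# Minimize the makespan, i.e. the finish time of the last item.
makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends)
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print("makespan:", solver.Value(makespan))
    print("start times:", [solver.Value(s) for s in starts])
```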

It’s easy to incorporate some real-life details into this model. For example:

  • Not every item will be ready to go into the oven at time zero. After all, you’re making them as you go. To take care of this, add a ready-time r_i (i.e. a lower bound) to the appropriate variable: r_i \leq s_i.
  • If a given item does not occupy the entire oven, but you still prefer to bake it alone, just artificially increase its space requirement c_i to be equal to the oven’s capacity C.
  • If you’re baking both savory and sweet things, you probably don’t want to mix them up in the oven. In that case, simply solve the problem twice.
  • If, for some reason, item i must be finished before item j starts baking (e.g. they need different temperatures), just include the constraint s_i + p_i \leq s_j, where p_i is the baking time of item i. (The snippet after this list shows this and the ready-time constraint in code.)
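Continuing the OR-Tools sketch from above, the first and last of these tweaks are one-liners (the ready times below are made-up values, just to show the shape of the constraints):

```python
# Ready times: item i cannot go into the oven before minute ready[i]
# (illustrative values; in practice these come from your prep schedule).
ready = [0, 10, 0, 15, 0]
for s, r in zip(starts, ready):
    model.Add(s >= r)

# Precedence: item 2 must be out of the oven before item 3 goes in.
model.Add(starts[2] + durations[2] <= starts[3])
```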

We could, of course, have approached this problem from an Integer Programming point of view. In that case, we’d have binary variables x_{it} that are equal to 1 if you start baking item i at time t, and equal to zero otherwise. For more details on this formulation, including model files and some pretty tough instances, take a look at the CuSPLIB web page.
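For completeness, here is a sketch of the core of that time-indexed formulation (my own rendering of the standard model, assuming time is discretized into minutes; see CuSPLIB for the full details). Each item must start exactly once, and at every minute t the items currently in the oven cannot exceed the capacity C:

\sum_{t} x_{it} = 1 \;\; \text{for every item } i, \qquad \sum_{i} \sum_{t'=t-p_i+1}^{t} c_i \, x_{it'} \leq C \;\; \text{for every time } t

The makespan can then be minimized by adding \sum_t (t + p_i)\, x_{it} \leq M for every item i and minimizing M.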

In the spirit of holiday baking, I will close with some pictures of past baking jobs run on my house’s machine (a.k.a. the oven). Enjoy! :-)

Key Lime Pie

Carrot Oatmeal Cookies (recipe here)

Sparkling Ginger Chip Cookies (recipe here)

Irish Soda Bread

Six-Seed Soda Bread (recipe here)




Introducing CuSPLIB

A couple of colleagues and I are doing research on single-machine cumulative scheduling problems (CuSP). As part of this effort, we’ll have to create some benchmark instances on which to test our algorithms. Some time ago, I searched around for problem instances and could not find any. People seem to be more interested in the Resource Constrained Project Scheduling Problem (RCPSP), of which the CuSP is a special case/subproblem. One of the experts in the area told me that he was unaware of any standard CuSP benchmarks and that difficult instances were hard to generate. Therefore, I decided to make our instances public, hoping that (i) this could be helpful/useful to someone else out there, and (ii) this could attract more attention to this problem. As a result, CuSPLIB is born! It includes a few pieces of code (instance generator, MIP and CP models), an initial set of 10 instances, and some discussion about integer programming models for the problem. I intend to talk about other (alternative) models and include some references in the near future. The preliminary computational results are interesting and make me believe that it’s not that difficult to find challenging instances. Let me know what you think, and feel free to contribute to CuSPLIB! I’ll be updating it little by little as our research progresses.

